Without loss of generality, we may assume that if $(u, v) \in E$, then $(v, u)$ is also a member of $E$. Additionally, if $(v, u) \notin E$, then we may add $(v, u)$ to $E$ and then set $c(v, u) = 0$.
If two nodes in $G$ are distinguished – one as the source $s$ and the other as the sink $t$ – then $(G, c, s, t)$ is called a flow network.
## Flows
Flow functions model the net flow of units between pairs of nodes, and are useful when asking questions such as what is the maximum number of units that can be transferred from the source node s to the sink node t? The amount of flow between two nodes is used to represent the net amount of units being transferred from one node to the other.
The excess function $x_f : V \to \mathbb{R}$ represents the net flow entering a given node $u$ (i.e. the sum of the flows entering $u$) and is defined by
$$
x_f(u)=\sum_{w \in V} f(w,u) - \sum_{w \in V} f(u, w).
$$
A node $u$ is said to be active if $x_f(u) > 0$ (i.e. the node consumes flow), deficient if $x_f(u) < 0$ (i.e. the node produces flow), or conserving if $x_f(u) = 0$. In flow networks, the source $s$ is deficient, and the sink $t$ is active.
Pseudo-flows, feasible flows, and pre-flows are all examples of flow functions.
A pseudo-flow is a function $f : V \times V \to \mathbb{R}$ that satisfies the following two constraints for all nodes $u$ and $v$:
Skew symmetry constraint: The flow on an arc from $u$ to $v$ is equivalent to the negation of the flow on the arc from $v$ to $u$, that is: $f(u, v) = -f(v, u)$. The sign of the flow indicates the flow's direction.
Capacity constraint: An arc's flow cannot exceed its capacity, that is: $f(u, v) \le c(u, v)$.
A pre-flow is a pseudo-flow that, for all $v \in V \setminus \{s\}$, satisfies the additional constraint:
Non-deficient flows: The net flow entering the node $v$ is non-negative, except for the source, which "produces" flow. That is: $x_f(v) \ge 0$ for all $v \in V \setminus \{s\}$.
A feasible flow, or just a flow, is a pseudo-flow that, for all $v \in V \setminus \{s, t\}$, satisfies the additional constraint:
Flow conservation constraint: The total net flow entering a node $v$ is zero for all nodes in the network except the source $s$ and the sink $t$, that is: $x_f(v) = 0$ for all $v \in V \setminus \{s, t\}$.
In other words, for all nodes in the network except the source $s$ and the sink $t$, the total sum of the incoming flow of a node is equal to its outgoing flow (i.e.
$$
\sum_{(u,v) \in E} f(u,v) = \sum_{(v,z) \in E} f(v,z)
$$
, for each vertex $v \in V \setminus \{s, t\}$).
The value of a feasible flow $f$ for a network is the net flow into the sink $t$ of the flow network, that is: $|f| = x_f(t)$. Note that the flow value in a network is also equal to the total outgoing flow of the source $s$, that is: $|f| = -x_f(s)$. Also, if we define $A$ as a set of nodes in $V$ such that $s \in A$ and $t \notin A$, the flow value is equal to the total net flow going out of $A$ (i.e. $|f| = \sum_{(a,b) \in E,\, a \in A,\, b \notin A} f(a,b)$). The flow value in a network is the total amount of flow from $s$ to $t$.
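To make these definitions concrete, here is a small Python sketch (the dictionary-based representation and all names are my own illustration, not part of the article; flows live only on the forward arcs in $E$, with the skew-symmetric reverse values left implicit) that checks feasibility and reads off the flow value as the excess at the sink:

```python
def excess(f, u):
    """x_f(u): total flow entering u minus total flow leaving u."""
    incoming = sum(val for (a, b), val in f.items() if b == u)
    outgoing = sum(val for (a, b), val in f.items() if a == u)
    return incoming - outgoing

def is_feasible_flow(f, c, nodes, s, t):
    """Capacity constraint plus zero excess away from s and t."""
    respects_capacity = all(0 <= f[e] <= c[e] for e in f)
    conserving = all(excess(f, u) == 0 for u in nodes - {s, t})
    return respects_capacity and conserving

nodes = {"s", "a", "b", "t"}
c = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 2, ("b", "t"): 3}
f = {("s", "a"): 2, ("s", "b"): 1, ("a", "t"): 2, ("b", "t"): 1}

assert is_feasible_flow(f, c, nodes, "s", "t")
print("value |f| = x_f(t) =", excess(f, "t"))     # 3
print("equivalently -x_f(s) =", -excess(f, "s"))  # 3
```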
## Concepts useful to flow problems
### Flow decomposition
Flow decomposition is a process of breaking down a given flow into a collection of path flows and cycle flows. Every flow through a network can be decomposed into one or more paths and corresponding quantities, such that each edge in the flow equals the sum of all quantities of paths that pass through it. Flow decomposition is a powerful tool in optimization problems to maximize or minimize specific flow parameters.
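As an illustration, here is a sketch of one standard decomposition procedure in Python (assuming a feasible flow stored as a dict on forward arcs; the code and names are illustrative, not the article's): repeatedly trace arcs that still carry flow, peel off the bottleneck amount as one path or one cycle, and subtract it.

```python
def decompose(f, s, t):
    """Split a feasible flow into (path-or-cycle, amount) pieces."""
    f = dict(f)                      # work on a copy
    pieces = []
    while any(v > 0 for v in f.values()):
        # Start from s if it still sends flow, else from any loaded arc
        # (what remains then can only be cycles).
        start = s if any(v > 0 for (a, _), v in f.items() if a == s) else \
                next(a for (a, _), v in f.items() if v > 0)
        walk, seen = [start], {start}
        while True:
            u = walk[-1]
            # Conservation guarantees a loaded outgoing arc exists.
            v = next(b for (a, b), val in f.items() if a == u and val > 0)
            if v in seen:            # closed a cycle
                walk.append(v)
                walk = walk[walk.index(v):]
                break
            walk.append(v)
            seen.add(v)
            if v == t:               # completed an s-t path
                break
        amount = min(f[(walk[i], walk[i + 1])] for i in range(len(walk) - 1))
        for i in range(len(walk) - 1):
            f[(walk[i], walk[i + 1])] -= amount
        pieces.append((walk, amount))
    return pieces

f = {("s", "a"): 2, ("a", "t"): 2, ("s", "b"): 1, ("b", "t"): 1}
print(decompose(f, "s", "t"))
# [(['s', 'a', 't'], 2), (['s', 'b', 't'], 1)]
```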
### Adding arcs and flows
We do not use multiple arcs within a network because we can combine those arcs into a single arc. To combine two arcs into a single arc, we add their capacities and their flow values, and assign those to the new arc:
- Given any two nodes $u$ and $v$, having two arcs from $u$ to $v$ with capacities $c_1(u, v)$ and $c_2(u, v)$ respectively is equivalent to considering only a single arc from $u$ to $v$ with a capacity equal to $c_1(u, v) + c_2(u, v)$.
- Given any two nodes $u$ and $v$, having two arcs from $u$ to $v$ with pseudo-flows $f_1(u, v)$ and $f_2(u, v)$ respectively is equivalent to considering only a single arc from $u$ to $v$ with a pseudo-flow equal to $f_1(u, v) + f_2(u, v)$.
Along with the other constraints, the skew symmetry constraint must be remembered during this step to maintain the direction of the original pseudo-flow arc. Adding flow to an arc is the same as adding an arc with the capacity of zero.
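A minimal sketch of this merging rule, under an assumed list-of-arcs representation (not from the article):

```python
from collections import defaultdict

def merge_parallel(arcs):
    """arcs: list of (u, v, capacity, flow) with possible duplicates."""
    merged = defaultdict(lambda: [0, 0])
    for u, v, cap, flow in arcs:
        merged[(u, v)][0] += cap    # c = c1 + c2
        merged[(u, v)][1] += flow   # f = f1 + f2
    return {e: tuple(cf) for e, cf in merged.items()}

arcs = [("u", "v", 3, 2), ("u", "v", 2, 1)]
print(merge_parallel(arcs))  # {('u', 'v'): (5, 3)}
```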
### Residuals
The residual capacity of an arc $e = (u, v)$ with respect to a pseudo-flow $f$ is denoted $c_f(u, v)$, and it is the difference between the arc's capacity and its flow. That is, $c_f(u, v) = c(u, v) - f(u, v)$. From this we can construct a residual network, denoted $G_f(V, E_f)$, with a capacity function $c_f$ which models the amount of available capacity on the set of arcs in $G = (V, E)$. More specifically, the capacity function $c_f(u, v)$ of each arc $(u, v)$ in the residual network represents the amount of flow which can be transferred from $u$ to $v$ given the current state of the flow within the network.
This concept is used in the Ford–Fulkerson algorithm, which computes the maximum flow in a flow network.
Note that there can be an unsaturated path (a path with available capacity) from $u$ to $v$ in the residual network, even though there is no such path from $u$ to $v$ in the original network. Since flows in opposite directions cancel out, decreasing the flow from $v$ to $u$ is the same as increasing the flow from $u$ to $v$.
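A small sketch of this construction (representation assumed as before: capacities and flows as dicts keyed by arc; my own illustration):

```python
def residual(c, f):
    """Residual capacities c_f(u, v) = c(u, v) - f(u, v); flow on a
    forward arc also creates residual capacity on the reverse arc,
    which is how "cancelling" opposite flow appears."""
    cf = {}
    for (u, v), cap in c.items():
        flow = f.get((u, v), 0)
        cf[(u, v)] = cf.get((u, v), 0) + cap - flow  # remaining forward room
        cf[(v, u)] = cf.get((v, u), 0) + flow        # room to push flow back
    return cf

c = {("s", "a"): 5, ("a", "t"): 3}
f = {("s", "a"): 3, ("a", "t"): 3}
print(residual(c, f))
# {('s', 'a'): 2, ('a', 's'): 3, ('a', 't'): 0, ('t', 'a'): 3}
```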
### Augmenting paths
An augmenting path is a path $(u_1, u_2, \dots, u_k)$ in the residual network, where $u_1 = s$, $u_k = t$, and $c_f(u_i, u_{i+1}) > 0$ for all $1 \le i < k$. More simply, an augmenting path is an available flow path from the source to the sink. A network is at maximum flow if and only if there is no augmenting path in the residual network $G_f$.
The bottleneck is the minimum residual capacity of all the edges in a given augmenting path; see the example in the "Example" section of this article. The flow network is at maximum flow if and only if it has a bottleneck with a value equal to zero. If any augmenting path exists, its bottleneck weight will be greater than 0. In other words, if there is a bottleneck value greater than 0, then there is an augmenting path from the source to the sink. However, we know that if there is any augmenting path, then the network is not at maximum flow, which in turn means that, if there is a bottleneck value greater than 0, then the network is not at maximum flow.
The term "augmenting the flow" for an augmenting path means updating the flow of each arc in this augmenting path to equal the capacity of the bottleneck. Augmenting the flow corresponds to pushing additional flow along the augmenting path until there is no remaining available residual capacity in the bottleneck.
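Putting the last two subsections together, a maximum flow can be computed by repeatedly finding an augmenting path and pushing the bottleneck amount along it. The sketch below uses breadth-first search for the path, in the style of the Edmonds–Karp algorithm; the representation and the example network are illustrative, not taken from the article's figures:

```python
from collections import deque

def max_flow(c, s, t):
    """c: dict mapping (u, v) to capacity. Returns the max-flow value."""
    # Residual capacities, including reverse arcs with capacity 0.
    cf = dict(c)
    for (u, v) in list(c):
        cf.setdefault((v, u), 0)

    def bfs_path():
        """Shortest augmenting path s -> t with positive residual capacity."""
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for (a, b), cap in cf.items():
                if a == u and cap > 0 and b not in parent:
                    parent[b] = u
                    if b == t:
                        path, node = [], t
                        while node is not None:
                            path.append(node)
                            node = parent[node]
                        return path[::-1]
                    queue.append(b)
        return None

    value = 0
    while (path := bfs_path()) is not None:
        edges = list(zip(path, path[1:]))
        bottleneck = min(cf[e] for e in edges)  # minimum residual capacity
        for (u, v) in edges:
            cf[(u, v)] -= bottleneck            # push flow forward
            cf[(v, u)] += bottleneck            # allow it to be undone later
        value += bottleneck
    return value

c = {("s", "a"): 5, ("s", "b"): 3, ("a", "t"): 4,
     ("b", "a"): 2, ("b", "t"): 2}
print(max_flow(c, "s", "t"))  # 6
```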
### Multiple sources and/or sinks
Sometimes, when modeling a network with more than one source, a supersource is introduced to the graph. This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called a supersink.
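A sketch of the construction (the node labels "S*" and "T*" and the use of float("inf") for the infinite capacities are my own illustration):

```python
def add_super_nodes(c, sources, sinks):
    """Reduce multiple sources/sinks to a single supersource/supersink."""
    c = dict(c)
    for s in sources:
        c[("S*", s)] = float("inf")   # supersource feeds every source
    for t in sinks:
        c[(t, "T*")] = float("inf")   # every sink drains to the supersink
    return c, "S*", "T*"

c = {("s1", "a"): 4, ("s2", "a"): 2, ("a", "t1"): 3, ("a", "t2"): 3}
c2, S, T = add_super_nodes(c, ["s1", "s2"], ["t1", "t2"])
# Any single-source max-flow routine can now be run from S to T.
```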
## Example
In Figure 1 you see a flow network with source labeled $s$, sink $t$, and four additional nodes. The flow and capacity are denoted
$$
f/c
$$
. Notice how the network upholds the capacity constraint and the flow conservation constraint. The total amount of flow from $s$ to $t$ is 5, which can be easily seen from the fact that the total outgoing flow from $s$ is 5, which is also the incoming flow to $t$. By the skew symmetry constraint, the flow from $c$ to $a$ is $-2$ because the flow from $a$ to $c$ is 2.
In Figure 2 you see the residual network for the same given flow. Notice how there is positive residual capacity on some edges where the original capacity is zero in Figure 1, for example for the edge
$$
(d,c)
$$
. This network is not at maximum flow.
There is available capacity along the paths
$$
(s,a,c,t)
$$
,
$$
(s,a,b,d,t)
$$
and
$$
(s,a,b,d,c,t)
$$
, which are then the augmenting paths.
The bottleneck of the
$$
(s,a,c,t)
$$
path is equal to
$$
\min(c(s,a)-f(s,a), c(a,c)-f(a,c), c(c,t)-f(c,t))
$$
$$
=\min(c_f(s,a), c_f(a,c), c_f(c,t))
$$
$$
= \min(5-3, 3-2, 2-1)
$$
$$
= \min(2, 1, 1) = 1
$$
.
## Applications
Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet.
Flows can pertain to people or material over transportation networks, or to electricity over electrical distribution systems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint is equivalent to Kirchhoff's current law.
Flow networks also find applications in ecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in a food web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow.
The field of ecosystem network analysis, developed by Robert Ulanowicz and others, involves using concepts from information theory and thermodynamics to study the evolution of these networks over time.
## Classifying flow problems
The simplest and most common problem using flow networks is to find what is called the maximum flow, which provides the largest possible total flow from the source to the sink in a given graph. There are many other problems which can be solved using max flow algorithms, if they are appropriately modeled as flow networks, such as bipartite matching, the assignment problem and the transportation problem. Maximum flow problems can be solved in polynomial time with various algorithms (see table). The max-flow min-cut theorem states that finding a maximal network flow is equivalent to finding a cut of minimum capacity that separates the source and the sink, where a cut is the division of vertices such that the source is in one division and the sink is in another.
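As one illustration of such modeling, a bipartite matching instance can be converted into a flow network with unit capacities (a standard reduction; the helper below is my own sketch, and the node labels are illustrative):

```python
def matching_network(left, right, allowed_pairs):
    """Unit capacities from a source to the left side, across allowed
    pairs, and from the right side to a sink; the max-flow value then
    equals the maximum matching size."""
    c = {}
    for u in left:
        c[("s", u)] = 1            # each left vertex used at most once
    for u, v in allowed_pairs:
        c[(u, v)] = 1
    for v in right:
        c[(v, "t")] = 1            # each right vertex used at most once
    return c, "s", "t"

c, s, t = matching_network(["x1", "x2"], ["y1", "y2"],
                           [("x1", "y1"), ("x1", "y2"), ("x2", "y1")])
# Feeding (c, s, t) to a max-flow routine (e.g. the sketch earlier in
# this article) returns 2, the size of a maximum matching.
```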
Well-known algorithms for the Maximum Flow Problem (with $n$ nodes and $m$ arcs):

| Inventor(s) | Year | Time complexity |
|---|---|---|
| Dinic's algorithm | 1970 | $O(n^2 m)$ |
| Edmonds–Karp algorithm | 1972 | $O(n m^2)$ |
| MPM (Malhotra, Pramodh-Kumar, and Maheshwari) algorithm | 1978 | $O(n^3)$ |
| Push–relabel algorithm (Goldberg & Tarjan) | 1988 | $O(n^2 m)$ |
| James B. Orlin | 2013 | $O(nm)$ |
| Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva | 2022 | $O(m^{1+o(1)})$ |
In a multi-commodity flow problem, you have multiple sources and sinks, and various "commodities" which are to flow from a given source to a given sink. This could be for example various goods that are produced at various factories, and are to be delivered to various given customers through the same transportation network.
In a minimum cost flow problem, each edge
$$
(u,v)
$$
has a given cost
$$
k(u,v)
$$
, and the cost of sending the flow
$$
f(u,v)
$$
across the edge is
$$
f(u,v) \cdot k(u,v)
$$
. The objective is to send a given amount of flow from the source to the sink, at the lowest possible price.
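Using the notation introduced earlier, one standard way to state this as a linear program (my paraphrase, with $d$ denoting the required amount of flow; this formulation is not spelled out in the article) is:

$$
\begin{aligned}
\text{minimize} \quad & \sum_{(u,v) \in E} k(u,v)\, f(u,v) \\
\text{subject to} \quad & 0 \le f(u,v) \le c(u,v) \quad \text{for all } (u,v) \in E, \\
& x_f(v) = 0 \quad \text{for all } v \in V \setminus \{s, t\}, \qquad x_f(t) = d.
\end{aligned}
$$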
In a circulation problem, you have a lower bound
$$
\ell(u,v)
$$
on the edges, in addition to the upper bound
$$
c(u,v)
$$
. Each edge also has a cost. Often, flow conservation holds for all nodes in a circulation problem, and there is a connection from the sink back to the source. In this way, you can dictate the total flow with
$$
\ell(t,s)
$$
and
$$
c(t,s)
$$
. The flow circulates through the network, hence the name of the problem.
In a network with gains or generalized network each edge has a gain, a real number (not zero) such that, if the edge has gain g, and an amount x flows into the edge at its tail, then an amount gx flows out at the head.
In a source localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network.
This can be done in linear time for trees and cubic time for arbitrary networks and has applications ranging from tracking mobile phone users to identifying the originating source of disease outbreaks.
Source: https://en.wikipedia.org/wiki/Flow_network
In quantum field theory, partition functions are generating functionals for correlation functions, making them key objects of study in the path integral formalism. They are the imaginary time versions of statistical mechanics partition functions, giving rise to a close connection between these two areas of physics. Partition functions can rarely be solved for exactly, although free theories do admit such solutions. Instead, a perturbative approach is usually implemented, this being equivalent to summing over Feynman diagrams.
## Generating functional
### Scalar theories
In a
$$
d
$$
-dimensional field theory with a real scalar field
$$
\phi
$$
and action
$$
S[\phi]
$$
, the partition function is defined in the path integral formalism as the functional
$$
Z[J] = \int \mathcal D\phi \ e^{iS[\phi] + i \int d^dx J(x)\phi(x)}
$$
where
$$
J(x)
$$
is a fictitious source current.
It acts as a generating functional for arbitrary n-point correlation functions
$$
G_n(x_1, \dots, x_n) = (-i)^n \frac{1}{Z[0]} \frac{\delta^n Z[J]}{\delta J(x_1)\cdots \delta J(x_n)}\bigg|_{J=0}.
$$
The derivatives used here are functional derivatives rather than regular derivatives since they are acting on functionals rather than regular functions.
From this it follows that an equivalent expression for the partition function, reminiscent of a power series in source currents, is given by
$$
Z[J] = \sum_{n\geq 0}\frac{i^n}{n!}\int \prod^n_{i=1} d^dx_i \, G_n(x_1, \dots, x_n) J(x_1)\cdots J(x_n).
$$
In curved spacetimes there is an added subtlety that must be dealt with due to the fact that the initial vacuum state need not be the same as the final vacuum state.
Partition functions can also be constructed for composite operators in the same way as they are for fundamental fields. Correlation functions of these operators can then be calculated as functional derivatives of these functionals. For example, the partition function for a composite operator $\mathcal O(x)$ is given by
$$
Z_{\mathcal O}[J] = \int \mathcal D \phi e^{iS[\phi]+i\int d^d x J(x) \mathcal O(x)}.
$$
Knowing the partition function completely solves the theory since it allows for the direct calculation of all of its correlation functions. However, there are very few cases where the partition function can be calculated exactly. While free theories do admit exact solutions, interacting theories generally do not. Instead the partition function can be evaluated at weak coupling perturbatively, which amounts to regular perturbation theory using Feynman diagrams with
$$
J
$$
insertions on the external legs. The symmetry factors for these types of diagrams differ from those of correlation functions since all external legs have identical
$$
J
$$
insertions that can be interchanged, whereas the external legs of correlation functions are all fixed at specific coordinates and therefore cannot be interchanged.
By performing a Wick transformation, the partition function can be expressed in Euclidean spacetime as
$$
Z[J] = \int \mathcal D\phi \ e^{-(S_E[\phi] + \int d^d x_E J\phi)},
$$
where
$$
S_E
$$
is the Euclidean action and
$$
x_E
$$
are Euclidean coordinates. This form is closely connected to the partition function in statistical mechanics, especially since the Euclidean Lagrangian is usually bounded from below in which case it can be interpreted as an energy density. It also allows for the interpretation of the exponential factor as a statistical weight for the field configurations, with larger fluctuations in the gradient or field values leading to greater suppression. This connection with statistical mechanics also lends additional intuition for how correlation functions should behave in a quantum field theory.
### General theories
Most of the same principles of the scalar case hold for more general theories with additional fields. Each field requires the introduction of its own fictitious current, with antiparticle fields requiring their own separate currents. Acting on the partition function with a derivative of a current brings down its associated field from the exponential, allowing for the construction of arbitrary correlation functions. After differentiation, the currents are set to zero when correlation functions in a vacuum state are desired, but the currents can also be set to take on particular values to yield correlation functions in non-vanishing background fields.
For partition functions with Grassmann valued fermion fields, the sources are also Grassmann valued. For example, a theory with a single Dirac fermion $\psi$ requires the introduction of two Grassmann currents
$$
\eta
$$
and
$$
\bar \eta
$$
so that the partition function is
$$
Z[\bar \eta, \eta] = \int \mathcal D \bar \psi \mathcal D \psi \ e^{iS[\psi, \bar \psi] + i\int d^d x (\bar \eta \psi + \bar \psi \eta)}.
$$
Functional derivatives with respect to
$$
\bar \eta
$$
give fermion fields while derivatives with respect to
$$
\eta
$$
give anti-fermion fields in the correlation functions.
### Thermal field theories
A thermal field theory at temperature $T$ is equivalent in Euclidean formalism to a theory with a compactified temporal direction of length
$$
\beta = 1/T
$$
.
Partition functions must be modified appropriately by imposing periodicity conditions on the fields and the Euclidean spacetime integrals
$$
Z[\beta,J] = \int \mathcal D\phi e^{-S_{E,\beta}[\phi]+\int_\beta d^d x_E J \phi}\bigg|_{\phi(\boldsymbol x, 0) = \phi(\boldsymbol x, \beta)}.
$$
This partition function can be taken as the definition of the thermal field theory in imaginary time formalism. Correlation functions are acquired from the partition function through the usual functional derivatives with respect to currents
$$
G_{n,\beta}(x_1, \dots, x_n) = \frac{\delta^n Z[\beta, J]}{\delta J(x_1)\cdots \delta J(x_n)}\bigg|_{J=0}.
$$
## Free theories
The partition function can be solved exactly in free theories by completing the square in terms of the fields.
Since a shift by a constant does not affect the path integral measure, this allows for separating the partition function into a constant of proportionality
$$
N
$$
arising from the path integral, and a second term that only depends on the current.
For the scalar theory this yields
$$
Z_0[J] = N \exp\bigg(-\frac{1}{2}\int d^d x d^d y \ J(x)\Delta_F(x-y)J(y)\bigg),
$$
where
$$
\Delta_F(x-y)
$$
is the position space Feynman propagator
$$
\Delta_F(x-y) = \int \frac{d^d p}{(2\pi)^d}\frac{i}{p^2-m^2+i\epsilon}e^{-ip\cdot (x-y)}.
$$
This partition function fully determines the free field theory.
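As a consistency check (a worked step added here, not present in the original text), applying the functional-derivative definition of the correlation functions to this closed form reproduces the propagator as the two-point function:

$$
G_2(x_1, x_2) = (-i)^2\,\frac{1}{Z_0[0]}\,\frac{\delta^2 Z_0[J]}{\delta J(x_1)\,\delta J(x_2)}\bigg|_{J=0} = \Delta_F(x_1 - x_2),
$$

since each derivative brings down a factor $-\int d^d y\, \Delta_F(x_i - y) J(y)$, and only the term in which the second derivative acts on this factor survives at $J = 0$.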
In the case of a theory with a single free Dirac fermion, completing the square yields a partition function of the form
$$
Z_0[\bar \eta, \eta] = N \exp\bigg(\int d^d x d^d y \ \bar \eta(y) \Delta_D(x-y) \eta(x)\bigg),
$$
where
$$
\Delta_D(x-y)
$$
is the position space Dirac propagator
$$
\Delta_D(x-y) = \int \frac{d^d p}{(2\pi)^d}\frac{i({p\!\!\!/}+m)}{p^2-m^2+i\epsilon}e^{-ip\cdot(x-y)}.
$$
## Further reading
- Ashok Das, Field Theory: A Path Integral Approach, 2nd edition, World Scientific (Singapore, 2006); paperback.
- Kleinert, Hagen, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore, 2004); paperback (also available online: PDF-files).
- Jean Zinn-Justin (2009), "Path integral", Scholarpedia, 4(2): 8674.
Source: https://en.wikipedia.org/wiki/Partition_function_%28quantum_field_theory%29
Revision Control System (RCS) is an early implementation of a version control system (VCS). It is a set of UNIX commands that allow multiple users to develop and maintain program code or documents. With RCS, users can make their own revisions of a document, commit changes, and merge them. RCS was originally developed for programs but is also useful for text documents or configuration files that are frequently revised.
## History
### Development
RCS was first released in 1982 by Walter F. Tichy at Purdue University. It was an alternative tool to the then-popular Source Code Control System (SCCS), which was among the first version control software tools (developed in 1972 by early Unix developers). RCS is currently maintained by the GNU Project.
An innovation in RCS is the adoption of reverse deltas. Instead of storing every revision in a file like SCCS does with interleaved deltas, RCS stores a set of edit instructions to go back to an earlier version of the file. Tichy claims that it is faster for most cases because the recent revisions are used more often.
### Legal and licensing
Initially (through version 3, which was distributed in 4.3BSD), its license prohibited redistribution without written permission from Walter Tichy.
A READ_ME file accompanied some versions of RCS which further restricted distribution, e.g., in 4.3BSD-Reno.
Ca. 1989, the RCS license was altered to something similar to the contemporary BSD licenses, as seen by comments in the source code.
RCS 4.3, released 26 July 1990, was distributed "under license by the Free Software Foundation", under the terms of the GPL.
OpenBSD provides a different implementation called OpenRCS, which is BSD-licensed.
## Behavior
### Mode of operation
RCS works well with standalone files and supports multi-file projects but, by modern standards, that support is limited: RCS can assemble the versions of multiple files into a single release (via "symbolic names") but it lacks support for atomic commit across those files. Although it provides branching, the version syntax is cumbersome. Instead of using branches, many teams just use the built-in locking mechanism and work on a single head branch.
### Usage
RCS revolves around the usage of "revision groups" or sets of files that have been checked-in via the `co` (checkout) and `ci` (check-in) commands.
By default, a checked-in file is removed and replaced with a ",v" file (so foo.rb when checked in becomes foo.rb,v) which can then be checked out by anyone with access to the revision group. RCS files (again, files with the extension ",v") reflect the main file with additional metadata on its first lines. Once checked in, RCS stores revisions in a tree structure that can be followed so that a user can revert a file to a previous form if necessary.
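A minimal check-in/check-out round trip looks as follows; this sketch drives the standard RCS commands from Python, and the file name foo.rb is just the example used above (RCS must be installed for this to run):

```python
import subprocess

def rcs(*args):
    """Run one RCS command, raising if it fails."""
    subprocess.run(list(args), check=True)

rcs("ci", "-u", "-t-initial revision", "foo.rb")    # creates foo.rb,v
rcs("co", "-l", "foo.rb")                           # lock, get editable copy

with open("foo.rb", "a") as fh:                     # make some change
    fh.write("# tweak\n")

rcs("ci", "-u", "-mdescribe the change", "foo.rb")  # check in a new revision
rcs("rlog", "foo.rb")                               # inspect the revision tree
```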
### Advantages
- Simple structure and easy to work with
- Revision saving is not dependent on a central repository
### Disadvantages
- There is little security, in the sense that the version history can be edited by the users.
- Only one user can work on a file at a time.
Source: https://en.wikipedia.org/wiki/Revision_Control_System
In mathematics and statistics, a quantitative variable may be continuous or discrete if it is typically obtained by measuring or counting, respectively. If it can take on two real values and all the values between them, the variable is continuous in that interval. If it can take on a value such that there is a non-infinitesimal gap on each side of it containing no values that the variable can take on, then it is discrete around that value. In some contexts, a variable can be discrete in some ranges of the number line and continuous in others. In statistics, continuous and discrete variables are distinct statistical data types which are described with different probability distributions.
## Continuous variable
A continuous variable is a variable such that there are possible values between any two values.
For example, a variable over a non-empty range of the real numbers is continuous, if it can take on any value in that range.
Methods of calculus are often used in problems in which the variables are continuous, for example in continuous optimization problems.
In statistical theory, the probability distributions of continuous variables can be expressed in terms of probability density functions.
In continuous-time dynamics, the variable time is treated as continuous, and the equation describing the evolution of some variable over time is a differential equation.
The instantaneous rate of change is a well-defined concept that takes the ratio of the change in the dependent variable to the independent variable at a specific instant.
## Discrete variable
In contrast, a variable is a discrete variable if and only if there exists a one-to-one correspondence between this variable and a subset of
$$
\mathbb{N}
$$
, the set of natural numbers. In other words, a discrete variable over a particular interval of real values is one for which, for any value in the range that the variable is permitted to take on, there is a positive minimum distance to the nearest other permissible value. The value of a discrete variable can be obtained by counting, and the number of permitted values is either finite or countably infinite. Common examples are variables that must be integers, non-negative integers, positive integers, or only the integers 0 and 1.
Methods of calculus do not readily lend themselves to problems involving discrete variables. Especially in multivariable calculus, many models rely on the assumption of continuity.
Examples of problems involving discrete variables include integer programming.
In statistics, the probability distributions of discrete variables can be expressed in terms of probability mass functions.
In discrete time dynamics, the variable time is treated as discrete, and the equation of evolution of some variable over time is called a difference equation. For certain discrete-time dynamical systems, the system response can be modelled by solving the difference equation for an analytical solution.
In econometrics and more generally in regression analysis, sometimes some of the variables being empirically related to each other are 0-1 variables, being permitted to take on only those two values. The purpose of the discrete values of 0 and 1 is to use the dummy variable as a ‘switch’ that can ‘turn on’ and ‘turn off’ by assigning the two values to different parameters in an equation. A variable of this type is called a dummy variable. If the dependent variable is a dummy variable, then logistic regression or probit regression is commonly employed.
In the case of regression analysis, a dummy variable can be used to represent subgroups of the sample in a study (e.g. the value 0 corresponding to a constituent of the control group).
## Mixture of continuous and discrete variables
A mixed multivariate model can contain both discrete and continuous variables. For instance, a simple mixed multivariate model could have a discrete variable
$$
x
$$
, which only takes on values 0 or 1, and a continuous variable
$$
y
$$
. An example of a mixed model could be a research study on the risk of psychological disorders based on one binary measure of psychiatric symptoms and one continuous measure of cognitive performance. Mixed models may also involve a single variable that is discrete over some range of the number line and continuous at another range.
In probability theory and statistics, the probability distribution of a mixed random variable consists of both discrete and continuous components. A mixed random variable does not have a cumulative distribution function that is discrete or everywhere-continuous. An example of a mixed type random variable is the probability of wait time in a queue.
The likelihood of a customer experiencing a zero wait time is discrete, while non-zero wait times are evaluated on a continuous time scale. In physics (particularly quantum mechanics, where this sort of distribution often arises), Dirac delta functions are often used to treat continuous and discrete components in a unified manner. For example, the previous example might be described by a probability density
$$
p(t)=\alpha \delta (t) + g(t)
$$
, such that
$$
P(t>0)=\int_0^\infty g(t)\,dt=1-\alpha
$$
, and
$$
P(t=0)=\alpha
$$
.
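A short simulation makes the decomposition concrete; in this sketch (my own illustration, not from the article) $g$ is taken to be $(1-\alpha)$ times an exponential density, so that $P(t=0)=\alpha$ and $P(t>0)=1-\alpha$:

```python
import random

def sample_wait_time(alpha, rng=random):
    """Draw from p(t) = alpha*delta(t) + g(t)."""
    if rng.random() < alpha:       # discrete atom at zero
        return 0.0
    return rng.expovariate(1.0)    # continuous part

alpha = 0.3
draws = [sample_wait_time(alpha) for _ in range(100_000)]
print(sum(d == 0.0 for d in draws) / len(draws))  # close to 0.3
```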
Source: https://en.wikipedia.org/wiki/Continuous_or_discrete_variable%23Discrete_variable
Turing's proof is a proof by Alan Turing, first published in November 1936 with the title "On Computable Numbers, with an Application to the Entscheidungsproblem". It was the second proof (after Church's theorem) of the negation of Hilbert's Entscheidungsproblem; that is, the conjecture that some purely mathematical yes–no questions can never be answered by computation; more technically, that some decision problems are "undecidable" in the sense that there is no single algorithm that infallibly gives a correct "yes" or "no" answer to each instance of the problem. In Turing's own words:
"what I shall prove is quite different from the well-known results of Gödel ... I shall now show that there is no general method which tells whether a given formula U is provable in K [Principia Mathematica]".
Turing followed this proof with two others. The second and third both rely on the first. All rely on his development of typewriter-like "computing machines" that obey a simple set of rules and his subsequent development of a "universal computing machine". As per UK copyright law, the work entered the public domain on 1 January 2025, 70 full calendar years after Turing's death on 7 June 1954.
## Summary of the proofs
In his proof that the Entscheidungsproblem can have no solution, Turing proceeded from two proofs that were to lead to his final proof.
His first theorem is most relevant to the halting problem; the second is more relevant to Rice's theorem.
First proof: that no "computing machine" exists that can decide whether or not an arbitrary "computing machine" (as represented by an integer 1, 2, 3, . . .) is "circle-free" (i.e. goes on printing its number in binary ad infinitum): "...we have no general process for doing this in a finite number of steps" (p. 132, ibid.). Turing's proof, although it seems to use the "diagonal process", in fact shows that his machine (called H) cannot calculate its own number, let alone the entire diagonal number (Cantor's diagonal argument): "The fallacy in the argument lies in the assumption that B [the diagonal number] is computable" The proof does not require much mathematics.
Second proof: This one is perhaps more familiar to readers as Rice's theorem: "We can show further that there can be no machine E which, when supplied with the S.D ["program"] of an arbitrary machine M, will determine whether M ever prints a given symbol (0 say)"
Third proof: "Corresponding to each computing machine M we construct a formula Un(M) and we show that, if there is a general method for determining whether Un(M) is provable, then there is a general method for determining whether M ever prints 0".
The third proof requires the use of formal logic to prove a first lemma, followed by a brief word-proof of the second:
Finally, in only 64 words and symbols Turing proves by reductio ad absurdum that "the Hilbert Entscheidungsproblem can have no solution".
### Summary of the first proof
Turing created a thicket of abbreviations. See the glossary at the end of the article for definitions.
Some key clarifications:
Turing spent much of his paper actually "constructing" his machines to convince us of their truth. This was required by his use of the reductio ad absurdum form of proof. We must emphasize the "constructive" nature of this proof. Turing describes what could be a real machine, really buildable.
The only questionable element is the existence of machine D, which this proof will eventually show to be impossible.
Turing begins the proof with the assertion of the existence of a “decision/determination” machine D. When fed any S.D (string of symbols A, C, D, L, R, N, semicolon “;”) it will determine if this S.D (symbol string) represents a "computing machine" that is either "circular" — and therefore "un-satisfactory u" — or "circle-free" — and therefore "satisfactory s".
Turing makes no comment about how machine D goes about its work. For sake of argument, we suppose that D would first look to see if the string of symbols is "well-formed" (i.e. in the form of an algorithm and not just a scramble of symbols), and if not then discard it. Then it would go “circle-hunting”. To do this perhaps it would use “heuristics” (tricks: taught or learned). For purposes of the proof, these details are not important.
Turing then describes (rather loosely) the algorithm (method) to be followed by a machine he calls H. Machine H contains within it the decision-machine D (thus D is a “subroutine” of H). Machine H’s algorithm is expressed in H’s table of instructions, or perhaps in H’s Standard Description on tape and united with the universal machine U; Turing does not specify this.
Machine H is responsible for converting any number N into an equivalent S.D symbol string for sub-machine D to test. (In programming parlance: H passes an arbitrary "S.D” to D, and D returns “satisfactory” or “unsatisfactory”.) Machine H is also responsible for keeping a tally R (“Record”?) of successful numbers (we suppose that the number of “successful” S.D's, i.e. R, is much less than the number of S.D's tested, i.e. N). Finally, H prints on a section of its tape a diagonal number “beta-primed” B’.
H creates this B’ by “simulating” (in the computer-sense) the “motions” of each “satisfactory” machine/number; eventually this machine/number under test will arrive at its Rth “figure” (1 or 0), and H will print it. H then is responsible for “cleaning up the mess” left by the simulation, incrementing N and proceeding onward with its tests, ad infinitum.
Note: All these machines that H is hunting for are what Turing called "computing machines". These compute binary-decimal-numbers in an endless stream of what Turing called "figures": only the symbols 1 and 0.
### An example to illustrate the first proof
An example: Suppose machine H has tested 13472 numbers and produced 5 satisfactory numbers, i.e. H has converted the numbers 1 through 13472 into S.D's (symbol strings) and passed them to D for test.
As a consequence H has tallied 5 satisfactory numbers and run the first one to its 1st "figure", the second to its 2nd figure, the third to its 3rd figure, the fourth to its 4th figure, and the fifth to its 5th figure. The count now stands at N = 13472, R = 5, and B' = ".10011" (for example). H cleans up the mess on its tape, and proceeds:
H increments N = 13473 and converts "13473" to symbol string ADRLD. If sub-machine D deems ADRLD unsatisfactory, then H leaves the tally-record R at 5. H will increment the number N to 13474 and proceed onward. On the other hand, if D deems ADRLD satisfactory then H will increment R to 6. H will convert N (again) into ADRLD [this is just an example, ADRLD is probably useless] and “run” it using the universal machine U until this machine-under-test (U "running" ADRLD) prints its 6th “figure” i.e. 1 or 0.
H will print this 6th number (e.g. “0”) in the “output” region of its tape (e.g. B’ = “.100110”).
H cleans up the mess, and then increments the number N to 13474.
The whole process unravels when H arrives at its own number K. We will proceed with our example. Suppose the successful-tally/record R stands at 12. H finally arrives at its own number minus 1, i.e. N = K-1 = 4355...3214, and this number is unsuccessful. Then H increments N to produce K = 4355...3215, i.e. its own number. H converts this to “LDDR...DCAR” and passes it to decision-machine D. Decision-machine D must return “satisfactory” (that is: H must by definition go on and on testing, ad infinitum, because it is "circle-free").
So H now increments tally R from 12 to 13 and then re-converts the number-under-test K into its S.D and uses U to simulate it. But this means that H will be simulating its own motions. What is the first thing the simulation will do? This simulation K-aka-H either creates a new N or “resets” the “old” N to 1. This "K-aka-H" either creates a new R or “resets” the “old” R to 0. Old-H “runs” new "K-aka-H" until it arrives at its 12th figure.
But it never makes it to the 13th figure; K-aka-H eventually arrives at 4355...3215, again, and K-aka-H must repeat the test. K-aka-H will never reach the 13th figure. The H-machine probably just prints copies of itself ad infinitum across blank tape.
But this contradicts the premise that H is a satisfactory, non-circular computing machine that goes on printing the diagonal number's 1's and 0's forever. (We will see the same thing if N is reset to 1 and R is reset to 0.)
If the reader does not believe this, they can write a "stub" for decision-machine D (stub "D" will return "satisfactory") and then see for themselves what happens at the instant machine H encounters its own number.
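Taking up that suggestion, here is one way to sketch the experiment (a drastically simplified toy model in Python; the encoding of machines as plain integers and all names are illustrative, and the point is only to watch H recurse when it reaches its own number):

```python
import sys

def D(n):
    """Stub decision machine: declares every number satisfactory."""
    return "satisfactory"

def figure(n, r):
    """Stand-in for 'run machine n with U until its r-th figure'."""
    return (n >> r) & 1   # an arbitrary but well-defined digit

def H(K):
    """Builds the diagonal number B' digit by digit; K is H's own number."""
    beta_prime, R = [], 0
    for N in range(1, K + 1):
        if D(N) == "satisfactory":
            R += 1
            if N == K:
                # To print its R-th figure, H must simulate itself,
                # which restarts this very loop: the contradiction.
                return H(K)
            beta_prime.append(figure(N, R))
    return beta_prime

sys.setrecursionlimit(50)
try:
    H(10)
except RecursionError:
    print("H never produces its own figure")  # the regress, made visible
```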
### Summary of the second proof
Less than one page long, the passage from premises to conclusion is obscure.
Turing proceeds by reductio ad absurdum. He asserts the existence of a machine E, which when given the S.D (Standard Description, i.e. "program") of an arbitrary machine M, will determine whether M ever prints a given symbol (0 say). He does not assert that this M is a "computing machine".
Given the existence of machine E, Turing proceeds as follows:
1. If machine E exists then a machine G exists that determines if M prints 0 infinitely often, AND
1. If E exists then another process exists [we can call the process/machine G' for reference] that determines if M prints 1 infinitely often, THEREFORE
1. When we combine G with G' we have a process that determines if M prints an infinity of figures, AND
1. IF the process "G with G'" determines M prints an infinity of figures, THEN "G with G'" has determined that M is circle-free, BUT
1. This process "G with G'" that determines if M is circle-free, by proof 1, cannot exist, THEREFORE
1. Machine E does not exist.
### Details of second proof
The difficulty in the proof is step 1. The reader will be helped by realizing that Turing is not explaining his subtle handiwork. (In a nutshell: he is using certain equivalencies between the “existential-“ and “universal-operators” together with their equivalent expressions written with logical operators.)
Here's an example: Suppose we see before us a parking lot full of hundreds of cars. We decide to go around the entire lot looking for: “Cars with flat (bad) tires”.
After an hour or so we have found two “cars with bad tires.” We can now say with certainty that “Some cars have bad tires”. Or we could say: “It’s not true that ‘All the cars have good tires’”. Or: “It is true that: ‘not all the cars have good tires”. Let us go to another lot. Here we discover that “All the cars have good tires.” We might say, “There’s not a single instance of a car having a bad tire.” Thus we see that, if we can say something about each car separately then we can say something about ALL of them collectively.
This is what Turing does:
From M he creates a collection of machines {M1, M2, M3, M4, ..., Mn} and about each he writes a sentence: “X prints at least one 0” and allows only two “truth values”, True = blank or False = :0:. One by one he determines the truth value of the sentence for each machine and makes a string of blanks or :0:, or some combination of these.
We might get something like this: “M1 prints a 0” = True AND “M2 prints a 0” = True AND “M3 prints a 0” = True AND “M4 prints a 0” = False, ... AND “Mn prints a 0” = False. He gets the string
if there are an infinite number of machines Mn. If, on the other hand, every machine had produced a "True", then the expression on the tape would be
Thus Turing has converted statements about each machine considered separately into a single "statement" (string) about all of them. Given the machine (he calls it G) that created this expression, he can test it with his machine E and determine if it ever produces a 0. In our first example above we see that indeed it does, so we know that not all the M's in our sequence print 0s.
But the second example shows that, since the string is blanks, every Mn in our sequence has produced a 0.
All that remains for Turing to do is create a process to create the sequence of Mn's from a single M.
Suppose M prints this pattern:
M => ...AB01AB0010AB…
Turing creates another machine F that takes M and crunches out a sequence of Mn's that successively convert the first n 0's to “0-bar” ($\bar{0}$):
He states, without showing details, that this machine F is truly build-able. We can see that one of a couple things could happen. F may run out of machines that have 0's, or it may have to go on ad infinitum creating machines to “cancel the zeros”.
Turing now combines machines E and F into a composite machine G. G starts with the original M, then uses F to create all the successor-machines M1, M2,. . ., Mn. Then G uses E to test each machine starting with M. If E detects that a machine never prints a zero, G prints :0: for that machine.
If E detects that a machine does print a 0 (we assume, Turing doesn’t say) then G prints :: or just skips this entry, leaving the squares blank. We can see that a couple things can happen.
Now, what happens when we apply E to G itself?
We can apply the same process for determining if M prints 1 infinitely often. When we combine these processes, we can determine that M does, or does not, go on printing 1's and 0's ad infinitum. Thus we have a method for determining if M is circle-free. By Proof 1 this is impossible. So the first assertion that E exists, is wrong: E does not exist.
### Summary of the third proof
Here Turing proves "that the Hilbert Entscheidungsproblem can have no solution".
Both Lemmas #1 and #2 are required to form the "IF AND ONLY IF" (i.e. logical equivalence) needed by the proof.
Turing demonstrates the existence of a formula Un(M) which says, in effect, that "in some complete configuration of M, 0 appears on the tape" (p. 146). This formula is TRUE, that is, it is "constructible", and he shows how to go about this.
Then Turing proves two Lemmas, the first requiring all the hard work. (The second is the converse of the first.) Then he uses reductio ad absurdum to prove his final result:
1. There exists a formula Un(M). This formula is TRUE, AND
1. If the Entscheidungsproblem can be solved THEN a mechanical process exists for determining whether Un(M) is provable (derivable), AND
1. By Lemmas 1 and 2: Un(M) is provable IF AND ONLY IF 0 appears in some "complete configuration" of M, AND
1. IF 0 appears in some "complete configuration" of M THEN a mechanical process exists that will determine whether arbitrary M ever prints 0, AND
1. By Proof 2 no mechanical process exists that will determine whether arbitrary M ever prints 0, THEREFORE
1. Un(M) is not provable (it is TRUE, but not provable) which means that the Entscheidungsproblem is unsolvable.
### Details of the third proof
[Readers who intend to study the proof in detail should first correct their copies of the third proof's pages with the corrections that Turing supplied. Readers should also come equipped with a solid background in (i) logic and (ii) the paper of Kurt Gödel: "On Formally Undecidable Propositions of Principia Mathematica and Related Systems". For assistance with Gödel's paper they may consult e.g. Ernest Nagel and James R. Newman, Gödel's Proof, New York University Press, 1958.]
To follow the technical details, the reader will need to understand the definition of "provable" and be aware of important "clues".
"Provable" means, in the sense of Gödel, that (i) the axiom system itself is powerful enough to produce (express) the sentence "This sentence is provable", and (ii) that in any arbitrary "well-formed" proof the symbols lead by axioms, definitions, and substitution to the symbols of the conclusion.
First clue: "Let us put the description of M into the first standard form of §6". Section 6 describes the very specific "encoding" of machine M on the tape of a "universal machine" U. This requires the reader to know some idiosyncrasies of Turing's universal machine U and the encoding scheme.
(i) The universal machine is a set of "universal" instructions that reside in an "instruction table". Separate from this, on U's tape, a "computing machine" M will reside as "M-code". The universal table of instructions can print on the tape the symbols A, C, D, 0, 1, u, v, w, x, y, z, : .
The various machines M can print these symbols only indirectly by commanding U to print them.
(ii) The "machine code" of M consists of only a few letters and the semicolon, i.e. D, C, A, R, L, N, ; . Nowhere within the "code" of M will the numerical "figures" (symbols) 1 and 0 ever appear. If M wants U to print a symbol from the collection blank, 0, 1 then it uses one of the following codes to tell U to print them. To make things more confusing, Turing calls these symbols S0, S1, and S2, i.e.
blank = S0 = D
0 = S1 = DC
1 = S2 = DCC
(iii) A "computing machine", whether it is built directly into a table (as his first examples show), or as machine-code M on universal-machine U's tape, prints its number on blank tape (to the right of M-code, if there is one) as 1s and 0s forever proceeding to the right.
(iv) If a "computing machine" is U+"M-code", then "M-code" appears first on the tape; the tape has a left end and the "M-code" starts there and proceeds to the right on alternate squares. When the M-code comes to an end (and it must, because of the assumption that these M-codes are finite algorithms), the "figures" will begin as 1s and 0s on alternate squares, proceeding to the right forever. Turing uses the (blank) alternate squares (called "E"- "eraseable"- squares) to help U+"M-code" keep track of where the calculations are, both in the M-code and in the "figures" that the machine is printing.
(v) A "complete configuration" is a printing of all symbols on the tape, including M-code and "figures" up to that point, together with the figure currently being scanned (with a pointer-character printed to the left of the scanned symbol?). If we have interpreted Turing's meaning correctly, this will be a hugely long set of symbols. But whether the entire M-code must be repeated is unclear; only a printing of the current M-code instruction is necessary plus the printing of all figures with a figure-marker).
(vi) Turing reduced the vast possible number of instructions in "M-code" (again: the code of M to appear on the tape) to a small canonical set of three forms, one of them similar to this: {qi Sj Sk R ql}, e.g. if the machine is executing instruction #qi and symbol Sj is on the square being scanned, then print symbol Sk, go Right, and then go to instruction ql. The other instructions are similar, encoding for "Left" L and "No motion" N. It is this set that is encoded by the string of symbols qi = DA...A, Sj = DC...C, Sk = DC...C, R, ql = DA...A. Each instruction is separated from another one by the semicolon. For example, {q5, S1 S0 L q3} means: Instruction #5: If the scanned symbol is 0 then print blank, go Left, then go to instruction #3.
Using the scheme above, it is encoded as ;DAAAAADCDLDAAA: the run of A's after each D gives the m-configuration number (q5 and q3), and the run of C's gives the symbol number (S1 = DC, S0 = D).
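A minimal Python sketch of this encoding (the helper names are ours, not Turing's), which also converts the resulting S.D into a Description Number via the digit substitution given in the glossary below (A=1, C=2, D=3, L=4, R=5, N=6, ;=7):

```python
# Sketch: encode a Turing instruction {q_i, S_j, S_k, move, q_l} as an S.D
# fragment, then turn the S.D into a Description Number (D.N).

def enc_q(i):
    return "D" + "A" * i      # m-configuration q_i -> D followed by i A's

def enc_s(j):
    return "D" + "C" * j      # symbol S_j -> D plus j C's (S0=D, S1=DC, S2=DCC)

def encode(qi, sj, sk, move, ql):
    # each instruction carries its ";" separator
    return ";" + enc_q(qi) + enc_s(sj) + enc_s(sk) + move + enc_q(ql)

DIGIT = {"A": "1", "C": "2", "D": "3", "L": "4", "R": "5", "N": "6", ";": "7"}

sd = encode(5, 1, 0, "L", 3)   # {q5, S1 S0 L q3} from the text
dn = "".join(DIGIT[c] for c in sd)
print(sd)                      # ;DAAAAADCDLDAAA
print(dn)                      # 731111132343111
```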
Second clue: Turing is using ideas introduced in Gödel's paper, that is, the "Gödelization" of (at least part of) the formula for Un(M). This clue appears only as a footnote on page 138: "A sequence of r primes is denoted by ^(r)" (ibid.) [Here, r inside parentheses is "raised".] This "sequence of primes" appears in a formula called F^(n).
Third clue: This reinforces the second clue. Turing's original attempt at the proof uses the expression:
Earlier in the paper (p. 138) Turing had used this expression and defined N(u) to mean "u is a non-negative integer" (ibid.), i.e. a Gödel number. But, with the Bernays corrections, Turing abandoned this approach (i.e. the use of N(u)), and the only place where "the Gödel number" appears explicitly is where he uses F^(n).
What does this mean for the proof?
The first clue means that a simple examination of the M-code on the tape will not reveal if a symbol 0 is ever printed by U+"M-code". A testing-machine might look for the appearance of DC in one of the strings of symbols that represent an instruction. But will this instruction ever be "executed"? Something has to "run the code" to find out. This something can be a machine, or it can be lines in a formal proof, i.e. Lemma #1.
The second and third clues mean that, because its foundation is Gödel's paper, the proof is difficult.
In the example below we will actually construct a simple "theorem" (a little Post–Turing machine program) and "run it". We will see just how mechanical a properly designed theorem can be. A proof, we will see, is just that: a "test" of the theorem that we perform by inserting a "proof example" into the beginning and seeing what pops out at the end.
Both Lemmas #1 and #2 are required to form the necessary "IF AND ONLY IF" (i.e. logical equivalence) required by the proof:
To quote Franzén:
Franzén has defined "provable" earlier in his book:
Thus a "sentence" is a string of symbols, and a theorem is a string of strings of symbols.
Turing is confronted with the following task:
Thus the "string of sentences" will be strings of strings of symbols. The only allowed individual symbols will come from Gödel's symbols defined in his paper.(In the following example we use the "<" and ">" around a "figure" to indicate that the "figure" is the symbol being scanned by the machine).
### An example to illustrate the third proof
In the following, we have to remind ourselves that every one of Turing's "computing machines" is a binary-number generator/creator that begins work on "blank tape". Properly constructed, it always cranks away ad infinitum, but its instructions are always finite. In Turing's proofs, the tape had a "left end" but extended right ad infinitum.
For the sake of the example below, we will assume that the "machine" is not a Universal machine, but rather the simpler "dedicated machine" with the instructions in the Table.
Our example is based on a modified Post–Turing machine model of a Turing Machine. This model prints only the symbols 0 and 1. The blank tape is considered to be all b's. Our modified model requires us to add two more instructions to the 7 Post–Turing instructions. The abbreviations that we will use are:
In the cases of R, L, E, P0, and P1 after doing its task the machine continues on to the next instruction in numerical sequence; ditto for the jumps if their tests fail.
But, for brevity, our examples will only use three squares. And these will always start as three blanks with the scanned square on the left: i.e. bbb. With the two symbols 1 and 0 plus blank, three squares can hold 3^3 = 27 distinct configurations:
We must be careful here, because it is quite possible that an algorithm will (temporarily) leave blanks in between figures, then come back and fill something in. More likely, an algorithm may do this intentionally.
In fact, Turing's machine does this: it prints on alternate squares, leaving blanks between figures so it can print locator symbols.
Turing always left alternate squares blank so his machine could place a symbol to the left of a figure (or a letter if the machine is the universal machine and the scanned square is actually in the “program”). In our little example we will forego this and just put symbols ( ) around the scanned symbol, as follows:
Let us write a simple program:
Remember that we always start with blank tape. The complete configuration prints the symbols on the tape followed by the next instruction:
Let us add "jump" into the formula. When we do this we discover why the complete configuration must include the tape symbols. (Actually, we see this better below.) This little program prints three "1"s to the right, reverses direction and moves left printing 0's until it hits a blank. We will print all the symbols that our machine uses:
Here at the end we find that a blank on the left has “come into play” so we leave it as part of the total configuration.
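Because the abbreviation list and the program listings themselves are elided above, the following Python sketch is only an assumption-laden reconstruction: a simulator for such a modified Post–Turing model (we assume R, L, E, P0, P1, a jump-if-blank "Jb n", an unconditional jump "J n", and a halt "H"), run on an assumed program that prints the figures 110 and then runs left until it scans a blank, reproducing the kind of final configuration described here.

```python
from collections import defaultdict

def run(program, max_steps=1000):
    """Run a program in the modified Post-Turing model sketched above.
    Jump targets are 1-indexed, as in the text's numbered programs."""
    tape = defaultdict(lambda: "b")          # blank tape is all b's
    head, pc = 0, 1
    for _ in range(max_steps):
        if not 1 <= pc <= len(program):
            break
        parts = program[pc - 1].split()
        op = parts[0]
        if op == "H":                        # halt
            break
        elif op == "R":   head += 1          # move right
        elif op == "L":   head -= 1          # move left
        elif op == "E":   tape[head] = "b"   # erase
        elif op == "P0":  tape[head] = "0"   # print 0
        elif op == "P1":  tape[head] = "1"   # print 1
        elif op == "Jb" and tape[head] == "b":
            pc = int(parts[1]); continue     # jump if scanned square is blank
        elif op == "J":
            pc = int(parts[1]); continue     # assumed unconditional jump
        pc += 1
    return tape, head

# Assumed program: print 1 1 0 rightwards, then run left until a blank.
prog = ["P1", "R", "P1", "R", "P0",
        "L", "Jb 9", "J 6",
        "H"]
tape, head = run(prog)
print("".join(tape[i] for i in range(min(tape), max(tape) + 1)), head)
# b110 -1 : the figures 110, head resting on the blank that came into play
```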
Given that we have done our job correctly, we add the starting conditions and see “where the theorem goes”. The resulting configuration—the number 110—is the PROOF.
- Turing's first task was to write a generalized expression using logic symbols to express exactly what his Un(M) would do.
- Turing's second task is to "Gödelize" this hugely long string-of-string-of-symbols using Gödel's technique of assigning primes to the symbols and raising the primes to prime-powers, per Gödel's method.
## Complications
Turing's proof is complicated by a large number of definitions, and confounded with what Martin Davis called "petty technical details" and "...technical details [that] are incorrect as given". Turing himself published "A Correction" in 1938: "The author is indebted to P. Bernays for pointing out these errors".
Specifically, in its original form the third proof is badly marred by technical errors. And even after Bernays' suggestions and Turing's corrections, errors remained in the description of the universal machine.
And confusingly, since Turing was unable to correct his original paper, some text within the body harks back to Turing's flawed first effort.
Bernays' corrections may be found in ; the original is to be found as "On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction," Proceedings of the London Mathematical Society (2), 43 (1938), 544-546.
The on-line version of Turing's paper has these corrections in an addendum; however, corrections to the Universal Machine must be found in an analysis provided by Emil Post.
At first, the only mathematician to pay close attention to the details of the proof was Post (cf. Hodges p. 125) — mainly because he had arrived simultaneously at a similar reduction of "algorithm" to primitive machine-like actions, so he took a personal interest in the proof. Strangely (perhaps World War II intervened) it took Post some ten years to dissect it in the Appendix to his paper Recursive Unsolvability of a Problem of Thue, 1947.
Other problems present themselves:
In his Appendix Post commented indirectly on the paper's difficulty and directly on its "outline nature" and "intuitive form" of the proofs. Post had to infer various points:
Anyone who has ever tried to read the paper will understand Hodges' complaint:
## Glossary of terms used by Turing
1 computable number — a number whose decimal is computable by a machine (i.e., by finite means such as an algorithm)
2 M — a machine with a finite instruction table and a scanning/printing head. M moves an infinite tape divided into squares each “capable of bearing a symbol”. The machine-instructions are only the following: move one square left, move one square right, on the scanned square print symbol p, erase the scanned square, if the symbol is p then do instruction aaa, if the scanned symbol is not p then do instruction aaa, if the scanned symbol is none then do instruction aaa, if the scanned symbol is any do instruction aaa [where “aaa” is an instruction-identifier].
3 computing machine — an M that prints two kinds of symbols, symbols of the first type are called “figures” and are only binary symbols 1 and 0; symbols of the second type are any other symbols.
4 figures — symbols 1 and 0, a.k.a. “symbols of the first kind”
5 m-configuration — the instruction-identifier, either a symbol in the instruction table, or a string of symbols representing the instruction- number on the tape of the universal machine (e.g. "DAAAAA = #5")
6 symbols of the second kind — any symbols other than 1 and 0
7 circular — an unsuccessful computing machine. It fails to print, ad infinitum, the figures 0 or 1 that represent in binary the number it computes
8 circle-free — a successful computing machine. It prints, ad infinitum, the figures 0 or 1 that represent in binary the number it computes
9 sequence — as in “sequence computed by the machine”: symbols of the first kind a.k.a. figures a.k.a. symbols 0 and 1.
10 computable sequence — can be computed by a circle-free machine
11 S.D – Standard Description: a sequence of symbols A, C, D, L, R, N, “;” on a Turing machine tape
12 D.N — Description number: an S.D converted to a number: 1=A, 2=C, 3 =D, 4=L, 5=R, 6=N, 7=;
13 M(n) — a machine whose D.N is number “n”
14 satisfactory — a S.D or D.N that represents a circle-free machine
15 U — a machine equipped with a "universal" table of instructions. If U is "supplied with a tape on the beginning of which is written the S.D of some computing machine M, U will compute the same sequence as M."
16 β’—“beta-primed”: A so-called “diagonal number” made up of the n-th figure (i.e. 0 or 1) of the n-th computable sequence [also: the computable number of H, see below]
17 u — an unsatisfactory, i.e. circular, S.D
18 s — satisfactory, i.e. circle-free S.D
19 D — a machine contained in H (see below). When supplied with the S.D of any computing machine M, D will test M's S.D and if circular mark it with “u” and if circle-free mark it with “s”
20 H — a computing machine. H computes β’, maintains R and N. H contains D and U and an unspecified machine (or process) that maintains N and R and provides D with the equivalent S.D of N. E also computes the figures of β’ and assembles them.
21 R — a record, or tally, of the quantity of successful (circle-free) S.D tested by D
22 N — a number, starting with 1, to be converted into an S.D by machine E. E maintains N.
23 K — a number. The D.N of H.
Required for Proof #3
5 m-configuration — the instruction-identifier, either a symbol in the instruction table, or a string of symbols representing the instruction's number on the tape of the universal machine (e.g. "DAAAAA = instruction #5"). In Turing's S.D the m-configuration appears twice in each instruction, the left-most string is the "current instruction"; the right-most string is the next instruction.
24 complete configuration — the number (figure 1 or 0) of the scanned square, the complete sequence of all symbols on the tape, and the m-configuration (the instruction-identifier, either a symbol or a string of symbols representing a number, e.g. "instruction DAAAAA = #5")
25 RSi(x, y) — "in the complete configuration x of M the symbol on square y is Si"; "complete configuration" is definition #24
26 I(x, y) — "in the complete configuration x of M the square y is scanned"
27 Kqm(x) — "in the complete configuration x of M the machine-configuration (instruction number) is qm"
28 F(x,y) — "y is the immediate successor of x" (follows Gödel's use of "f" as the successor-function).
29 G(x,y) — "x precedes y", not necessarily immediately
30 Inst{qi, Sj Sk L ql} is an abbreviation, as are Inst{qi, Sj Sk R ql}, and Inst{qi, Sj Sk N ql}. See below.
Turing reduces his instruction set to three “canonical forms” – one for Left, Right, and No-movement. Si and Sk are symbols on the tape.
| m-config | Tape symbol | Operations | Final m-config |
|---|---|---|---|
| qi | Si | PSk, L | qm |
| qi | Si | PSk, R | qm |
| qi | Si | PSk, N | qm |
For example, the operations in the first line are PSk = PRINT symbol Sk from the collection A, C, D, 0, 1, u, v, w, x, y, z, :, then move tape LEFT.
These he further abbreviated as:
(N1) qi Sj Sk L qm
(N2) qi Sj Sk R qm
(N3) qi Sj Sk N qm
In Proof #3 he calls the first of these “Inst{qi Sj Sk L ql}”, and he shows how to write the entire machine's S.D as the logical conjunction (logical AND) of these terms: this string is called “Des(M)”, as in “Description-of-M”.
For example, a machine that prints 0 and then 1's and 0's on alternate squares to the right ad infinitum might have the following table (a similar example appears on page 119):
(This has been reduced to canonical form with the “p-blank” instructions so it differs a bit from Turing's example.)
If we put them into the “Inst( )” form, the instructions will be the following (remembering: S0 is blank, S1 = 0, S2 = 1):
The reduction to the Standard Description (S.D) will be:
This agrees with his example in the book (there will be a blank between each letter and number). Universal machine U uses the alternate blank squares as places to put "pointers".
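Since the table and the S.D itself are elided here, the following Python sketch uses a plausible four-instruction reconstruction (an assumption on our part, consistent with the description above, with the "p-blank" instructions printing S0) and conjoins the encoded instructions into a full S.D string:

```python
# Sketch: build the S.D of an assumed machine that prints 0, blank, 1,
# blank, ... on successive squares, forever moving right.
def enc_q(i): return "D" + "A" * i
def enc_s(j): return "D" + "C" * j     # S0 = D, S1 = DC, S2 = DCC

table = [(1, 0, 1, "R", 2),            # q1: blank -> print 0 (S1), R, go q2
         (2, 0, 0, "R", 3),            # q2: blank -> print blank (S0), R, go q3
         (3, 0, 2, "R", 4),            # q3: blank -> print 1 (S2), R, go q4
         (4, 0, 0, "R", 1)]            # q4: blank -> print blank (S0), R, go q1

sd = "".join(";" + enc_q(q) + enc_s(sj) + enc_s(sk) + m + enc_q(ql)
             for q, sj, sk, m, ql in table)
print(sd)   # ;DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA
```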
## Notes
## References
### Citations
### Works cited
- The two papers of Post referenced above are included in this volume. Other papers include those by Gödel, Church, Rosser, and Kleene.
-
-
- Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
-
-
- This is the epochal paper where Turing defines Turing machines, shows that the Entscheidungsproblem is unsolvable.
Category:1937 in science
Category:Articles containing proofs
Category:Mathematical logic
Category:Mathematical proofs
Category:Theory of computation
Category:20th century in mathematics
Category:Public domain books
In abstract algebra, group theory studies the algebraic structures known as groups.
The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such as crystals and the hydrogen atom, and three of the four known fundamental forces in the universe, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
The early history of group theory dates from the 19th century. One of the most important mathematical achievements of the 20th century was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups.
## History
Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry.
The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term "group" and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program proclaimed group theory to be the organizing principle of geometry.
Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations.
In an attempt to come to grips with possible geometries (such as euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were, at first implicitly and later explicitly, used in algebraic number theory.
The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups.
## Main classes of groups
The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations.
### Permutation groups
The first class of groups to undergo a systematic study was permutation groups.
Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself by means of the left regular representation.
In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals.
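As an illustration of these definitions (our sketch, not from the article), the following Python code checks that the permutations of a 3-element set are closed under composition and inverses, and builds Cayley's left regular representation:

```python
from itertools import permutations

# Permutations of X = {0, 1, 2} as tuples p, where p[i] is the image of i.
def compose(p, q):
    return tuple(p[i] for i in q)      # (p o q)(i) = p(q(i))

S3 = list(permutations(range(3)))
e = (0, 1, 2)                          # the identity permutation

# S3 is closed under composition, and every element has an inverse in S3.
assert all(compose(p, q) in S3 for p in S3 for q in S3)
assert all(any(compose(p, q) == e for q in S3) for p in S3)

# Cayley's construction: each p acts on the group itself by left
# multiplication q -> p o q, a permutation of the 6 elements of S3.
left_regular = {p: tuple(compose(p, q) for q in S3) for p in S3}
```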
### Matrix groups
The next important class of groups is given by matrix groups, or linear groups.
Here G is a set consisting of invertible matrices of given order n over a field K that is closed under the products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G.
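A small sketch of the idea, with an assumed toy group (ours, not from the article): four rotation matrices inside GL(2, R), acting on R^2:

```python
import numpy as np

# The matrices {I, -I, J, -J}, with J a quarter-turn, form a copy of the
# cyclic group of order 4 inside GL(2, R).
I = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
G = [I, -I, J, -J]

# closed under products (and inverses: J's inverse is -J, which is in G)
for A in G:
    for B in G:
        assert any(np.allclose(A @ B, C) for C in G)

# the action on the vector space R^2 by linear transformations
v = np.array([2.0, 1.0])
print(J @ v)   # [-1.  2.] : v rotated a quarter turn
```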
### Transformation groups
Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related with the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure.
The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous.
### Abstract groups
Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group began to take hold, where "abstract" means that the nature of the elements is ignored in such a way that two isomorphic groups are considered as the same group. A typical way of specifying an abstract group is through a presentation by generators and relations,
$$
G = \langle S|R\rangle.
$$
A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy.
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of group with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on.
Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school.
### Groups with additional structure
An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the multiplication and inversion of the group are compatible with this structure, that is, they are continuous, smooth or regular (in the sense of algebraic geometry) maps, then G is a topological group, a Lie group, or an algebraic group.
The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus, compact connected Lie groups have been completely classified.
There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients.
## Branches of group theory
### Finite group theory
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups. As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields.
|
https://en.wikipedia.org/wiki/Group_theory
|
During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields.
Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry.
### Representation of groups
Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism:
$$
\rho:G \to \operatorname{GL}(V),
$$
where GL(V) consists of the invertible linear transformations of V.
In other words, to every group element g is assigned an automorphism ρ(g) such that ρ(g) ∘ ρ(h) = ρ(gh) for any h in G.
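A hedged toy example of such a homomorphism (our illustration): S3 represented on V = R^3 by permutation matrices, with the property ρ(p ∘ q) = ρ(p)ρ(q) checked exhaustively:

```python
import numpy as np
from itertools import permutations

# rho(p) is the permutation matrix sending basis vector e_j to e_{p(j)}.
def rho(p):
    M = np.zeros((3, 3))
    for j in range(3):
        M[p[j], j] = 1.0
    return M

def compose(p, q):
    return tuple(p[i] for i in q)      # (p o q)(i) = p(q(i))

# the homomorphism property rho(p o q) = rho(p) rho(q), for all p, q in S3
for p in permutations(range(3)):
    for q in permutations(range(3)):
        assert np.allclose(rho(compose(p, q)), rho(p) @ rho(q))
```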
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics. On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit. On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts (see Maschke's theorem). These parts, in turn, are much more easily manageable than the whole V (via Schur's lemma).
Given a group G, representation theory then asks what representations of G exist.
There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions.
### Lie theory
A Lie group is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse, page 3.
Lie groups represent the best-developed theory of continuous symmetry of mathematical objects and structures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for modern theoretical physics.
They provide a natural framework for analysing the continuous symmetries of differential equations (differential Galois theory), in much the same way as permutation groups are used in Galois theory for analysing the discrete symmetries of algebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations.
### Combinatorial and geometric group theory
Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible multiplications. A more compact way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators
$$
\{g_i\}_{i\in I}
$$
, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by
$$
\langle F \mid D\rangle.
$$
For example, the group presentation
$$
\langle a,b\mid aba^{-1}b^{-1}\rangle
$$
describes a group which is isomorphic to
$$
\mathbb{Z}\times\mathbb{Z}.
$$
A string consisting of generator symbols and their inverses is called a word.
Combinatorial group theory studies groups from the perspective of generators and relations. It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection of graphs via their fundamental groups. A fundamental theorem of this area is that every subgroup of a free group is free.
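As a sketch of how words behave in the free group (our illustration, with uppercase letters standing for inverse generators), free reduction can be done with a stack:

```python
# Reduce a word in a free group by cancelling adjacent inverse pairs
# (a and A = a^-1, b and B = b^-1, ...); a single stack pass suffices.
def reduce_word(word):
    out = []
    for ch in word:
        if out and out[-1] == ch.swapcase():
            out.pop()                # cancel x x^-1 or x^-1 x
        else:
            out.append(ch)
    return "".join(out)

print(reduce_word("abBAba"))   # "ba": bB and then aA cancel, leaving ba
```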
There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element. By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task.
Another, generally harder, algorithmically insoluble problem is the group isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation
$$
\langle x,y \mid xyxyx = e \rangle,
$$
is isomorphic to the additive group Z of integers, although this may not be immediately apparent. (Writing
$$
z=xy
$$
, one has
$$
G \cong \langle z,y \mid z^3 = y\rangle \cong \langle z\rangle.
$$
)
Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on. The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of the minimal path between the elements.
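A minimal sketch of the word metric (our example, assuming the group Z^2 with its standard generating set): breadth-first search over the Cayley graph computes the distance from the identity.

```python
from collections import deque

# Cayley graph of Z^2 with generators S = {(+-1, 0), (0, +-1)}: vertices
# are group elements, edges join v to v + s for each generator s.
GENS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def word_metric(g):
    """Length of the minimal path from the identity (0, 0) to g."""
    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        v = queue.popleft()
        if v == g:
            return dist[v]
        for a, b in GENS:
            w = (v[0] + a, v[1] + b)
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)

print(word_metric((2, 3)))   # 5: for these generators this is taxicab distance
```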
A theorem of Milnor and Svarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, then G is quasi-isometric (i.e. looks similar from a distance) to the space X.
## Connection of groups and symmetry
Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example
- If X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups.
- If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry). The corresponding group is called isometry group of X.
- If instead angles are preserved, one speaks of conformal maps. Conformal maps give rise to Kleinian groups, for example.
- Symmetries are not restricted to geometrical objects, but include algebraic objects as well. For instance, the equation
$$
x^2-3=0
$$
has the two solutions
$$
\sqrt{3}
$$
and
$$
-\sqrt{3}
$$
. In this case, the group that exchanges the two roots is the Galois group belonging to the equation. Every polynomial equation in one variable has a Galois group, that is a certain permutation group on its roots.
The axioms of a group formalize the essential aspects of symmetry. Symmetries form a group: they are closed because if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry and the associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
Frucht's theorem says that every group is the symmetry group of some graph.
So every abstract group is actually the symmetries of some explicit object.
The saying of "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question.
## Applications of group theory
Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group theoretic arguments underlie large parts of the theory of those entities.
### Galois theory
Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory.
It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group on 5 elements, is not solvable, which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory.
### Algebraic topology
Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea.
The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces which are spaces with prescribed homotopy groups. Similarly algebraic K-theory relies in a way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory.
### Algebraic geometry
Algebraic geometry likewise uses group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures. (For example the Hodge conjecture (in certain cases).) The one-dimensional case, namely elliptic curves is studied in particular detail. They are both theoretically and practically intriguing. In another direction, toric varieties are algebraic varieties acted on by a torus. Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities.
### Algebraic number theory
Algebraic number theory makes use of groups for some important applications. For example, Euler's product formula,
$$
\sum_{n\geq 1}\frac{1}{n^s} = \prod_{p \text{ prime}} \frac{1}{1-p^{-s}},
$$
captures the fact that any positive integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem. A quick numerical check of the identity is sketched below.
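The following sketch (added here for illustration; the cutoff N is arbitrary) compares a truncated sum with a truncated product for s = 2, where both approach ζ(2) = π²/6:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

s, N = 2, 100_000
zeta_sum   = sum(1 / n ** s for n in range(1, N + 1))
euler_prod = math.prod(1 / (1 - p ** -s) for p in primes_up_to(N))
print(zeta_sum, euler_prod, math.pi ** 2 / 6)  # all approximately equal
```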
### Harmonic analysis
Analysis on Lie groups and certain other groups is called harmonic analysis. Haar measures, that is, integrals invariant under translation in a Lie group, are used for pattern recognition and other image processing techniques.
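As a minimal numerical sketch (added here, taking the circle group U(1) as the example), the Haar measure is the uniform measure dθ/2π, and averages against it are unchanged by rotation; the test function and rotation angle below are arbitrary:

```python
import numpy as np

# Haar measure on the circle group: the uniform measure d(theta)/(2*pi).
theta = np.linspace(0.0, 2 * np.pi, 10_000, endpoint=False)
f = lambda t: np.cos(t) ** 2 + 0.3 * np.sin(3 * t)  # arbitrary periodic test
a = 1.234                                           # arbitrary rotation

# The average of f is (numerically) invariant under rotation by a.
print(np.mean(f(theta)), np.mean(f((theta + a) % (2 * np.pi))))
```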
### Combinatorics
In combinatorics, the notion of permutation group and the concept of group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma.
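As a hedged illustration (the parameters n = 6 beads and k = 2 colors are arbitrary choices), Burnside's lemma counts necklaces under rotation by averaging, over the cyclic group, the number of colorings each rotation fixes; a brute-force orbit count agrees:

```python
from math import gcd
from itertools import product

def necklaces_burnside(n, k):
    """k-colored necklaces of length n under rotation, via Burnside:
    the rotation by r fixes exactly k**gcd(r, n) colorings."""
    return sum(k ** gcd(r, n) for r in range(n)) // n

def necklaces_bruteforce(n, k):
    """Count orbits directly by enumerating all colorings."""
    seen, count = set(), 0
    for w in product(range(k), repeat=n):
        if w not in seen:
            count += 1
            seen.update(w[i:] + w[:i] for i in range(n))  # whole orbit
    return count

print(necklaces_burnside(6, 2), necklaces_bruteforce(6, 2))  # 14 14
```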
### Music
The presence of the 12-periodicity in the circle of fifths yields applications of elementary group theory in musical set theory. Transformational theory models musical transformations as elements of a mathematical group.
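A minimal sketch of this modelling (added here; pitch classes use the usual convention C = 0): transpositions T_n and inversions I_n act on pitch classes in Z/12Z, and composing them stays within the T/I group of transformational theory.

```python
# Pitch classes as Z/12Z; T_n and I_n generate the T/I group.
def T(n): return lambda x: (x + n) % 12   # transposition by n semitones
def I(n): return lambda x: (n - x) % 12   # inversion with index n

c_major = [0, 4, 7]                   # C, E, G
print([T(7)(x) for x in c_major])     # up a fifth: [7, 11, 2] = G major
print([I(0)(x) for x in c_major])     # inversion:  [0, 8, 5] = F minor

# Closure: composing two inversions is a transposition, e.g. I4 . I1 = T3.
assert all(I(4)(I(1)(x)) == T(3)(x) for x in range(12))
```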
### Physics
In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. According to Noether's theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group.
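As one concrete instance (a sketch added here, not the article's own example), the Lorentz group consists exactly of the matrices that preserve the Minkowski metric, which is easy to verify numerically for a boost:

```python
import numpy as np

# A Lorentz boost along x with rapidity phi; Lorentz group elements are
# precisely the matrices L with L^T @ eta @ L == eta.
phi = 0.7                                  # arbitrary rapidity
eta = np.diag([1.0, -1.0, -1.0, -1.0])     # Minkowski metric
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(phi)
L[0, 1] = L[1, 0] = -np.sinh(phi)

print(np.allclose(L.T @ eta @ L, eta))     # True: the boost preserves eta
```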
Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed by Willard Gibbs, relating to the summing of an infinite number of probabilities to yield a meaningful solution.
### Chemistry and materials science
In chemistry and materials science, point groups are used to classify regular polyhedra and the symmetries of molecules, while space groups classify crystal structures. The assigned groups can then be used to determine physical properties (such as chemical polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy, infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to construct molecular orbitals.
Molecular symmetry is responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group to any given molecule, it is necessary to find the set of symmetry operations present in it. A symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane, that moves the molecule such that it is indistinguishable from the original configuration. The sketch below makes this concrete for one point group.
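Here is a minimal sketch (added for illustration; C2v, the point group of water, is a standard textbook example) representing the four C2v operations as 3×3 matrices and verifying closure, i.e. that they form a group:

```python
import numpy as np

# C2v: identity, a C2 rotation about z, and two mirror planes (xz and yz).
ops = {
    "E":    np.eye(3),
    "C2":   np.diag([-1.0, -1.0, 1.0]),   # 180-degree rotation about z
    "s_xz": np.diag([1.0, -1.0, 1.0]),    # reflection through the xz plane
    "s_yz": np.diag([-1.0, 1.0, 1.0]),    # reflection through the yz plane
}

# Closure: the product of any two operations is again one of the four,
# so they form a group of order 4 (the multiplication table below).
for a, A in ops.items():
    row = [next(name for name, M in ops.items() if np.allclose(M, A @ B))
           for B in ops.values()]
    print(a, "->", row)
```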