In statistics and applications of statistics, normalization can have a range of meanings. In the simplest cases, normalization of ratings means adjusting values measured on different scales to a notionally common scale, often prior to averaging. In more complicated cases, normalization may refer to more sophisticated adjustments where the intention is to bring the entire probability distributions of adjusted values into alignment. In the case of normalization of scores in educational assessment, there may be an intention to align distributions to a normal distribution. A different approach to normalization of probability distributions is quantile normalization, where the quantiles of the different measures are brought into alignment. In another usage in statistics, normalization refers to the creation of shifted and scaled versions of statistics, where the intention is that these normalized values allow the comparison of corresponding normalized values for different datasets in a way that eliminates the effects of certain gross influences, as in an anomaly time series. Some types of normalization involve only a rescaling, to arrive at values relative to some size variable. In terms of levels of measurement, such ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios).
In theoretical statistics, parametric normalization can often lead to pivotal quantities – functions whose sampling distribution does not depend on the parameters – and to ancillary statistics – pivotal quantities that can be computed from observations, without knowing parameters.

## History

### Standard score (Z-score)

The concept of normalization emerged alongside the study of the normal distribution by Abraham De Moivre, Pierre-Simon Laplace, and Carl Friedrich Gauss from the 18th to the 19th century. As the name "standard" refers to the particular normal distribution with expectation zero and standard deviation one, that is, the standard normal distribution, normalization – in this case, "standardization" – was then used to refer to the rescaling of any distribution or data set to have mean zero and standard deviation one. While the study of the normal distribution structured the process of standardization, its result, also known as the Z-score – the difference between a sample value and the population mean divided by the population standard deviation, which measures how many standard deviations a value lies from its population mean – was not formalized and popularized until Ronald Fisher and Karl Pearson elaborated the concept as part of the broader framework of statistical inference and hypothesis testing in the early 20th century.

### Student's t-Statistic

William Sealy Gosset initiated the adjustment of the normal distribution and the standard score for small sample sizes. Educated in chemistry and mathematics at Winchester and Oxford, Gosset was employed by Guinness Brewery, then the biggest brewer in Ireland, and was tasked with precise quality control. Through small-sample experiments, Gosset discovered that the distribution of means from small samples deviated slightly from the distribution of means from large samples – the normal distribution – appearing "taller and narrower" in comparison. This finding was first written up in a Guinness internal report titled The application of the "Law of Error" to the work of the brewery and was sent to Karl Pearson for further discussion, which later yielded a formal publication titled The probable error of a mean in 1908. Under Guinness Brewery's confidentiality restrictions, Gosset published the paper under the pseudonym "Student". Gosset's work was later enhanced and transformed by Ronald Fisher into the form used today and, together with the names "Student's t-distribution" (referring to the adjusted normal distribution Gosset proposed) and "Student's t-statistic" (referring to the test statistic measuring the departure of the estimated value of a parameter from its hypothesized value divided by its standard error), was popularized through Fisher's publication Applications of "Student's" distribution.

### Feature Scaling

The rise of computers and multivariate statistics in the mid-20th century made it necessary to normalize data measured in different units, giving rise to feature scaling – a method used to rescale data to a fixed range – such as min-max scaling and robust scaling. This modern normalization process, aimed especially at large-scale data, became more formalized in fields including machine learning, pattern recognition, and neural networks in the late 20th century.
### Batch Normalization

Batch normalization was proposed by Sergey Ioffe and Christian Szegedy in 2015 to enhance the efficiency of training in neural networks.

## Examples

There are different types of normalizations in statistics – nondimensional ratios of errors, residuals, means and standard deviations, which are hence scale invariant – some of which may be summarized as follows. Note that in terms of levels of measurement, these ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios). See also Category:Statistical ratios.

| Name | Formula | Use |
|---|---|---|
| Standard score | $\frac{X - \mu}{\sigma}$ | Normalizing errors when population parameters are known. Works well for populations that are normally distributed. |
| Student's t-statistic | $\frac{\hat{\beta} - \beta_0}{\operatorname{SE}(\hat{\beta})}$ | The departure of the estimated value of a parameter from its hypothesized value, normalized by its standard error. |
| Studentized residual | $\frac{\hat{\epsilon}_i}{\hat{\sigma}_i}$ | Normalizing residuals when parameters are estimated, particularly across different data points in regression analysis. |
| Standardized moment | $\frac{\mu_k}{\sigma^k}$ | Normalizing moments, using the standard deviation $\sigma$ as a measure of scale. |
| Coefficient of variation | $\frac{\sigma}{\mu}$ | Normalizing dispersion, using the mean $\mu$ as a measure of scale, particularly for positive distributions such as the exponential distribution and Poisson distribution. |
| Min-max feature scaling | $X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$ | Feature scaling used to bring all values into the range $[0,1]$; also called unity-based normalization. It can be generalized to restrict the values to any interval $[a, b]$, using, for example, $X' = a + \frac{(X - X_{\min})(b - a)}{X_{\max} - X_{\min}}$. |

Note that some other ratios, such as the variance-to-mean ratio $$ \left(\frac{\sigma^2}{\mu}\right) $$ , are also used for normalization but are not nondimensional: the units do not cancel, and thus the ratio has units and is not scale-invariant.
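To make the first and last rows of the table concrete, here is a minimal Python sketch (an illustration added to this text, not part of the article); the sample array `values` and the target interval `[a, b]` are arbitrary choices.

```python
import numpy as np

# Hypothetical sample data; any 1-D numeric array would do.
values = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Standard score (z-score): subtract the mean and divide by the standard deviation.
# The population standard deviation (ddof=0) is used, matching (X - mu) / sigma.
z_scores = (values - values.mean()) / values.std(ddof=0)

# Min-max feature scaling: rescale to the unit interval [0, 1].
scaled = (values - values.min()) / (values.max() - values.min())

# Generalized min-max scaling to an arbitrary interval [a, b].
a, b = -1.0, 1.0
scaled_ab = a + (values - values.min()) * (b - a) / (values.max() - values.min())

print(z_scores)   # mean ~0, standard deviation ~1
print(scaled)     # values in [0, 1]
print(scaled_ab)  # values in [-1, 1]
```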
## Other types

Other non-dimensional normalizations that can be used with no assumptions on the distribution include:

- Assignment of percentiles. This is common on standardized tests. See also quantile normalization.
- Normalization by adding and/or multiplying by constants so that values fall between 0 and 1. This is used for probability density functions, with applications in fields such as quantum mechanics in assigning probabilities to $$ |\psi|^2 $$ .
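The percentile assignment mentioned in the list above can be sketched in a few lines of Python (an added illustration, not from the article); the midpoint convention used for the ranks is one common choice among several.

```python
import numpy as np

def percentile_ranks(values: np.ndarray) -> np.ndarray:
    """Assign each observation its percentile rank (0-100) within the sample."""
    order = values.argsort().argsort()           # 0-based rank of each value
    return 100.0 * (order + 0.5) / len(values)   # midpoint convention

scores = np.array([52, 67, 73, 88, 95])
print(percentile_ranks(scores))  # evenly spaced percentile ranks: [10. 30. 50. 70. 90.]
```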
https://en.wikipedia.org/wiki/Normalization_%28statistics%29
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in $$ \hat{\mathbf{v}} $$ (pronounced "v-hat"). The term normalized vector is sometimes used as a synonym for unit vector. The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e., $$ \mathbf{\hat{u}} = \frac{\mathbf{u}}{\|\mathbf{u}\|}=(\frac{u_1}{\|\mathbf{u}\|}, \frac{u_2}{\|\mathbf{u}\|}, ... , \frac{u_n}{\|\mathbf{u}\|}) $$ where ‖u‖ is the norm (or length) of u and $$ \mathbf{u} = (u_1, u_2, ..., u_n) $$ .
The proof is the following: $$ \|\mathbf{\hat{u}}\| = \sqrt{\left(\frac{u_1}{\sqrt{u_1^2+...+u_n^2}}\right)^2+...+\left(\frac{u_n}{\sqrt{u_1^2+...+u_n^2}}\right)^2} = \sqrt{\frac{u_1^2+...+u_n^2}{u_1^2+...+u_n^2}} = \sqrt{1} = 1 $$ A unit vector is often used to represent directions, such as normal directions.
Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors.
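As an added illustration (not part of the article), a short NumPy sketch of the normalization û = u/‖u‖ defined above; the helper name `normalize` is arbitrary.

```python
import numpy as np

def normalize(u: np.ndarray) -> np.ndarray:
    """Return the unit vector u / ||u|| for a non-zero vector u."""
    norm = np.linalg.norm(u)
    if norm == 0.0:
        raise ValueError("the zero vector cannot be normalized")
    return u / norm

u = np.array([3.0, 4.0])
u_hat = normalize(u)
print(u_hat)                  # [0.6 0.8]
print(np.linalg.norm(u_hat))  # 1.0, as shown in the proof above
```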
## Orthogonal coordinates

### Cartesian coordinates

Unit vectors may be used to represent the axes of a Cartesian coordinate system. For instance, the standard unit vectors in the direction of the x, y, and z axes of a three-dimensional Cartesian coordinate system are $$ \mathbf{\hat{x}} = \begin{bmatrix}1\\0\\0\end{bmatrix}, \,\, \mathbf{\hat{y}} = \begin{bmatrix}0\\1\\0\end{bmatrix}, \,\, \mathbf{\hat{z}} = \begin{bmatrix}0\\0\\1\end{bmatrix} $$ They form a set of mutually orthogonal unit vectors, typically referred to as a standard basis in linear algebra. They are often denoted using common vector notation (e.g., x or $$ \vec{x} $$ ) rather than standard unit vector notation (e.g., x̂). In most contexts it can be assumed that x, y, and z (or $$ \vec{x}, $$ $$ \vec{y}, $$ and $$ \vec{z} $$ ) are versors of a 3-D Cartesian coordinate system. The notations (î, ĵ, k̂), (x̂1, x̂2, x̂3), (êx, êy, êz), or (ê1, ê2, ê3), with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity (for instance with index symbols such as i, j, k, which are used to identify an element of a set or array or sequence of variables). When a unit vector in space is expressed in Cartesian notation as a linear combination of x, y, z, its three scalar components can be referred to as direction cosines. The value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe the orientation (angular position) of a straight line, segment of straight line, oriented axis, or segment of oriented axis (vector).
### Cylindrical coordinates

The three orthogonal unit vectors appropriate to cylindrical symmetry are:

- $$ \boldsymbol{\hat{\rho}} $$ (also designated $$ \mathbf{\hat{e}} $$ or $$ \boldsymbol{\hat s} $$ ), representing the direction along which the distance of the point from the axis of symmetry is measured;
- $$ \boldsymbol{\hat \varphi} $$ , representing the direction of the motion that would be observed if the point were rotating counterclockwise about the symmetry axis;
- $$ \mathbf{\hat{z}} $$ , representing the direction of the symmetry axis.

They are related to the Cartesian basis $$ \hat{x} $$ , $$ \hat{y} $$ , $$ \hat{z} $$ by: $$ \boldsymbol{\hat{\rho}} = \cos(\varphi)\mathbf{\hat{x}} + \sin(\varphi)\mathbf{\hat{y}} $$ $$ \boldsymbol{\hat \varphi} = -\sin(\varphi) \mathbf{\hat{x}} + \cos(\varphi) \mathbf{\hat{y}} $$ $$ \mathbf{\hat{z}} = \mathbf{\hat{z}}. $$ The vectors $$ \boldsymbol{\hat{\rho}} $$ and $$ \boldsymbol{\hat \varphi} $$ are functions of $$ \varphi, $$ and are not constant in direction. When differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on.
The derivatives with respect to $$ \varphi $$ are: $$ \frac{\partial \boldsymbol{\hat{\rho}}} {\partial \varphi} = -\sin \varphi\mathbf{\hat{x}} + \cos \varphi\mathbf{\hat{y}} = \boldsymbol{\hat \varphi} $$ $$ \frac{\partial \boldsymbol{\hat \varphi}} {\partial \varphi} = -\cos \varphi\mathbf{\hat{x}} - \sin \varphi\mathbf{\hat{y}} = -\boldsymbol{\hat{\rho}} $$ $$ \frac{\partial \mathbf{\hat{z}}} {\partial \varphi} = \mathbf{0}. $$
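An added numerical sketch (not part of the article) that builds ρ̂ and φ̂ at one sample angle and checks the orthonormality and the derivative relation ∂ρ̂/∂φ = φ̂ stated above; the angle value is arbitrary.

```python
import numpy as np

phi = 0.7  # arbitrary sample angle in radians

# Cylindrical unit vectors expressed in the Cartesian basis, as given above.
rho_hat = np.array([np.cos(phi), np.sin(phi), 0.0])
phi_hat = np.array([-np.sin(phi), np.cos(phi), 0.0])
z_hat = np.array([0.0, 0.0, 1.0])

# Both have unit length and are mutually orthogonal.
assert np.isclose(np.linalg.norm(rho_hat), 1.0)
assert np.isclose(np.linalg.norm(phi_hat), 1.0)
assert np.isclose(rho_hat @ phi_hat, 0.0)

# Finite-difference check of d(rho_hat)/d(phi) = phi_hat.
h = 1e-6
rho_hat_shifted = np.array([np.cos(phi + h), np.sin(phi + h), 0.0])
numerical_derivative = (rho_hat_shifted - rho_hat) / h
assert np.allclose(numerical_derivative, phi_hat, atol=1e-5)
```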
### Spherical coordinates

The unit vectors appropriate to spherical symmetry are: $$ \mathbf{\hat{r}} $$ , the direction in which the radial distance from the origin increases; $$ \boldsymbol{\hat{\varphi}} $$ , the direction in which the angle in the x-y plane counterclockwise from the positive x-axis is increasing; and $$ \boldsymbol{\hat \theta} $$ , the direction in which the angle from the positive z axis is increasing. To minimize redundancy of representations, the polar angle $$ \theta $$ is usually taken to lie between zero and 180 degrees. It is especially important to note the context of any ordered triplet written in spherical coordinates, as the roles of $$ \boldsymbol{\hat \varphi} $$ and $$ \boldsymbol{\hat \theta} $$ are often reversed. Here, the American "physics" convention is used. This leaves the azimuthal angle $$ \varphi $$ defined the same as in cylindrical coordinates. The Cartesian relations are: $$ \mathbf{\hat{r}} = \sin \theta \cos \varphi\mathbf{\hat{x}} + \sin \theta \sin \varphi\mathbf{\hat{y}} + \cos \theta\mathbf{\hat{z}} $$ $$ \boldsymbol{\hat \theta} = \cos \theta \cos \varphi\mathbf{\hat{x}} + \cos \theta \sin \varphi\mathbf{\hat{y}} - \sin \theta\mathbf{\hat{z}} $$ $$ \boldsymbol{\hat \varphi} = - \sin \varphi\mathbf{\hat{x}} + \cos \varphi\mathbf{\hat{y}} $$ The spherical unit vectors depend on both $$ \varphi $$ and $$ \theta $$ , and hence there are 5 possible non-zero derivatives. For a more complete description, see Jacobian matrix and determinant.
The non-zero derivatives are: $$ \frac{\partial \mathbf{\hat{r}}} {\partial \varphi} = -\sin \theta \sin \varphi\mathbf{\hat{x}} + \sin \theta \cos \varphi\mathbf{\hat{y}} = \sin \theta\boldsymbol{\hat \varphi} $$ $$ \frac{\partial \mathbf{\hat{r}}} {\partial \theta} =\cos \theta \cos \varphi\mathbf{\hat{x}} + \cos \theta \sin \varphi\mathbf{\hat{y}} - \sin \theta\mathbf{\hat{z}}= \boldsymbol{\hat \theta} $$ $$ \frac{\partial \boldsymbol{\hat{\theta}}} {\partial \varphi} =-\cos \theta \sin \varphi\mathbf{\hat{x}} + \cos \theta \cos \varphi\mathbf{\hat{y}} = \cos \theta\boldsymbol{\hat \varphi} $$ $$ \frac{\partial \boldsymbol{\hat{\theta}}} {\partial \theta} = -\sin \theta \cos \varphi\mathbf{\hat{x}} - \sin \theta \sin \varphi\mathbf{\hat{y}} - \cos \theta\mathbf{\hat{z}} = -\mathbf{\hat{r}} $$ $$ \frac{\partial \boldsymbol{\hat{\varphi}}} {\partial \varphi} = -\cos \varphi\mathbf{\hat{x}} - \sin \varphi\mathbf{\hat{y}} = -\sin \theta\mathbf{\hat{r}} -\cos \theta\boldsymbol{\hat{\theta}} $$
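An added symbolic sketch (not part of the article) that verifies two of the derivatives above with SymPy, using the Cartesian relations from the previous paragraph.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

# Spherical unit vectors in the Cartesian basis (American "physics" convention).
r_hat = sp.Matrix([sp.sin(theta) * sp.cos(phi), sp.sin(theta) * sp.sin(phi), sp.cos(theta)])
theta_hat = sp.Matrix([sp.cos(theta) * sp.cos(phi), sp.cos(theta) * sp.sin(phi), -sp.sin(theta)])
phi_hat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])

# d(r_hat)/d(phi) should equal sin(theta) * phi_hat.
assert sp.simplify(sp.diff(r_hat, phi) - sp.sin(theta) * phi_hat) == sp.zeros(3, 1)

# d(phi_hat)/d(phi) should equal -sin(theta) * r_hat - cos(theta) * theta_hat.
assert sp.simplify(sp.diff(phi_hat, phi) + sp.sin(theta) * r_hat + sp.cos(theta) * theta_hat) == sp.zeros(3, 1)
```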
### General unit vectors

Common themes of unit vectors occur throughout physics and geometry:

- Tangent vector to a curve/flux line. A normal vector to the plane containing and defined by the radial position vector and angular tangential direction of rotation is necessary so that the vector equations of angular motion hold.
- Normal to a surface tangent plane/plane containing radial position component and angular tangential component, expressed in terms of polar coordinates.
- Binormal vector to tangent and normal.
- Parallel to some axis/line: one unit vector aligned parallel to a principal direction, with a perpendicular unit vector in any radial direction relative to the principal line.
- Perpendicular to some axis/line in some radial direction.
- Possible angular deviation relative to some axis/line: a unit vector at acute deviation angle φ (including 0 or π/2 rad) relative to a principal direction.

## Curvilinear coordinates

In general, a coordinate system may be uniquely specified using a number of linearly independent unit vectors $$ \mathbf{\hat{e}}_n $$ (the actual number being equal to the degrees of freedom of the space). For ordinary 3-space, these vectors may be denoted $$ \mathbf{\hat{e}}_1, \mathbf{\hat{e}}_2, \mathbf{\hat{e}}_3 $$ .
It is nearly always convenient to define the system to be orthonormal and right-handed: $$ \mathbf{\hat{e}}_i \cdot \mathbf{\hat{e}}_j = \delta_{ij} $$ $$ \mathbf{\hat{e}}_i \cdot (\mathbf{\hat{e}}_j \times \mathbf{\hat{e}}_k) = \varepsilon_{ijk} $$ where $$ \delta_{ij} $$ is the Kronecker delta (which is 1 for i = j, and 0 otherwise) and $$ \varepsilon_{ijk} $$ is the Levi-Civita symbol (which is 1 for permutations ordered as ijk, and −1 for permutations ordered as kji).
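An added Python sketch (not part of the article) checking the orthonormality and right-handedness conditions above for the standard Cartesian basis; the small `levi_civita` helper is defined here only for the check.

```python
import numpy as np

# Standard Cartesian basis vectors e_1, e_2, e_3.
e = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]

def levi_civita(i: int, j: int, k: int) -> int:
    """Levi-Civita symbol epsilon_ijk for indices 0, 1, 2."""
    return int((i - j) * (j - k) * (k - i) / 2)

for i in range(3):
    for j in range(3):
        # Orthonormality: e_i . e_j = delta_ij
        assert np.isclose(e[i] @ e[j], 1.0 if i == j else 0.0)
        for k in range(3):
            # Right-handedness: e_i . (e_j x e_k) = epsilon_ijk
            assert np.isclose(e[i] @ np.cross(e[j], e[k]), levi_civita(i, j, k))
```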
## Right versor

A unit vector in $$ \mathbb{R}^3 $$ was called a right versor by W. R. Hamilton, as he developed his quaternions $$ \mathbb{H} \subset \mathbb{R}^4 $$ . In fact, he was the originator of the term vector, as every quaternion $$ q = s + v $$ has a scalar part s and a vector part v. If v is a unit vector in $$ \mathbb{R}^3 $$ , then the square of v in quaternions is −1. Thus by Euler's formula, $$ \exp (\theta v) = \cos \theta + v \sin \theta $$ is a versor in the 3-sphere. When θ is a right angle, the versor is a right versor: its scalar part is zero and its vector part v is a unit vector in $$ \mathbb{R}^3 $$ . Thus the right versors extend the notion of imaginary units found in the complex plane, where the right versors now range over the 2-sphere $$ \mathbb{S}^2 \subset \mathbb{R}^3 \subset \mathbb{H} $$ rather than the pair $$ \{i, -i\} $$ in the complex plane.
By extension, a right quaternion is a real multiple of a right versor.
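An added Python sketch (not part of the article) with a hand-rolled Hamilton product, checking that a pure unit quaternion squares to −1 and that exp(θv) = cos θ + v sin θ has unit norm; the particular vector and angle are arbitrary.

```python
import numpy as np

def quat_mul(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Hamilton product of quaternions given as (scalar, x, y, z) arrays."""
    s1, v1 = p[0], p[1:]
    s2, v2 = q[0], q[1:]
    scalar = s1 * s2 - v1 @ v2
    vector = s1 * v2 + s2 * v1 + np.cross(v1, v2)
    return np.concatenate(([scalar], vector))

# A unit vector v in R^3, embedded as the pure quaternion (0, v).
v = np.array([1.0, 2.0, 2.0]) / 3.0
v_quat = np.concatenate(([0.0], v))

# Its quaternion square is -1 (scalar part -1, vector part 0).
assert np.allclose(quat_mul(v_quat, v_quat), [-1.0, 0.0, 0.0, 0.0])

# exp(theta * v) = cos(theta) + v * sin(theta) is a versor: it has unit norm.
theta = 0.4
versor = np.concatenate(([np.cos(theta)], np.sin(theta) * v))
assert np.isclose(np.linalg.norm(versor), 1.0)

# At theta = pi/2 the scalar part vanishes: a right versor.
right_versor = np.concatenate(([np.cos(np.pi / 2)], np.sin(np.pi / 2) * v))
assert np.isclose(right_versor[0], 0.0, atol=1e-12)
```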
https://en.wikipedia.org/wiki/Unit_vector
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion, particularly accelerations, momenta, forces, of the constituents of the system; it can also be called vectorial mechanics. A scalar is a quantity, whereas a vector is represented by quantity and direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems. Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in the context as generalized coordinates.
https://en.wikipedia.org/wiki/Analytical_mechanics
The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up, so analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics. Two dominant branches of analytical mechanics are Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action.
One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and with some modifications, quantum mechanics and quantum field theory. Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory. The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom. The definitions and equations have a close analogy with those of mechanics.

## Motivation

The goal of mechanical theory is to solve mechanical problems, such as arise in physics and engineering. Starting from a physical system, such as a mechanism or a star system, a mathematical model is developed in the form of a differential equation.
The model can be solved numerically or analytically to determine the motion of the system. Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, and acceleration. These quantities characterise the motion of a body idealised as a "mass point" or a "particle", understood as a single point to which a mass is attached. Newton's method has been successfully applied to a wide range of physical problems, including the motion of a particle in Earth's gravitational field and the motion of planets around the Sun. In this approach, Newton's laws describe the motion by a differential equation and then the problem is reduced to the solving of that equation. When a mechanical system contains many particles, however (such as a complex mechanism or a fluid), Newton's approach is difficult to apply. Using a Newtonian approach is possible, under proper precautions, namely isolating each single particle from the others and determining all the forces acting on it. Such analysis is cumbersome even in relatively simple systems. Newton thought that his third law "action equals reaction" would take care of all complications. This is false even for so simple a system as the rotations of a solid body. In more complicated systems, the vectorial approach cannot give an adequate description. The analytical approach simplifies problems by treating mechanical systems as ensembles of particles that interact with each other, rather than considering each particle as an isolated unit. In the vectorial approach, forces must be determined individually for each particle, whereas in the analytical approach it is enough to know one single function which contains implicitly all the forces acting on and in the system. Such simplification is often done using certain kinematic conditions which are stated a priori. However, the analytical treatment does not require the knowledge of these forces and takes these kinematic conditions for granted. Still, deriving the equations of motion of a complicated mechanical system requires a unifying basis from which they follow. This is provided by various variational principles: behind each set of equations there is a principle that expresses the meaning of the entire set. Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations.
The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion. It is not altogether clear what is meant by 'solving' a set of differential equations. A problem is regarded as solved when the particles' coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (an elementary function), as in the time of Newton, but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions. If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because the initial conditions and t determine the coordinates at t. This is a fact especially at present, with the modern methods of computer modelling which provide arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations. Still, though lacking precise definitions, it is obvious that the two-body problem has a simple solution, whereas the three-body problem has not. The two-body problem is solved by formulas involving parameters; their values can be changed to study the class of all solutions, that is, the mathematical structure of the problem.
Moreover, an accurate mental or drawn picture can be made for the motion of two bodies, and it can be as real and accurate as the real bodies moving and interacting. In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values or a collection of such solutions does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves. Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable and that include a very wide range of problems indeed. Development of analytical mechanics has two objectives: (i) increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) understand the mathematical structure of mechanics.
In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed.

## Intrinsic motion

### Generalized coordinates and constraints

In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or another 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded, as the constraints determine the evolving relations among the coordinates, which relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted qi (i = 1, 2, 3...).

### Difference between curvilinear and generalized coordinates

Generalized coordinates incorporate constraints on the system. There is one generalized coordinate qi for each degree of freedom (for convenience labelled by an index i = 1, 2...N), i.e. each way the system can change its configuration, such as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates. The number of curvilinear coordinates equals the dimension of the position space in question (usually 3 for 3d space), while the number of generalized coordinates is not necessarily equal to this dimension; constraints can reduce the number of degrees of freedom (hence the number of generalized coordinates required to define the configuration of the system), following the general rule:

(dimension of position space) × (number of constituents of the system) − (number of constraints) = (number of degrees of freedom) = (number of generalized coordinates).
For a system with N degrees of freedom, the generalized coordinates can be collected into an N-tuple: $$ \mathbf{q} = (q_1, q_2, \dots, q_N) $$ and the time derivative (here denoted by an overdot) of this tuple gives the generalized velocities: $$ \frac{d\mathbf{q}}{dt} = \left(\frac{dq_1}{dt}, \frac{dq_2}{dt}, \dots, \frac{dq_N}{dt}\right) \equiv \mathbf{\dot{q}} = (\dot{q}_1, \dot{q}_2, \dots, \dot{q}_N) . $$
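As an added illustration (not part of the article), consider a planar pendulum of length ℓ: a single generalized coordinate, the angle θ, replaces the two constrained Cartesian coordinates. A short SymPy sketch under that assumption:

```python
import sympy as sp

t, ell = sp.symbols('t ell', positive=True)
theta = sp.Function('theta')(t)      # single generalized coordinate q1 = theta(t)

# Cartesian position written in terms of the generalized coordinate (a holonomic constraint):
x = ell * sp.sin(theta)
y = -ell * sp.cos(theta)

# Generalized velocity is simply the time derivative of the generalized coordinate.
theta_dot = sp.diff(theta, t)
print(theta_dot)                     # Derivative(theta(t), t)

# The constraint x**2 + y**2 = ell**2 is satisfied identically, so only one coordinate is needed.
assert sp.simplify(x**2 + y**2 - ell**2) == 0
```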
### D'Alembert's principle of virtual work

D'Alembert's principle states that infinitesimal virtual work done by a force across reversible displacements is zero, which is the work done by a force consistent with ideal constraints of the system. The idea of a constraint is useful, since it limits what the system can do and can provide steps towards solving for the motion of the system.
The equation for D'Alembert's principle is: $$ \delta W = \boldsymbol{\mathcal{Q}} \cdot \delta\mathbf{q} = 0 \,, $$ where $$ \boldsymbol{\mathcal{Q}} = (\mathcal{Q}_1, \mathcal{Q}_2, \dots, \mathcal{Q}_N) $$ are the generalized forces (script Q instead of ordinary Q is used here to prevent conflict with canonical transformations below) and $$ \mathbf{q} = (q_1, q_2, \dots, q_N) $$ are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics: $$ \boldsymbol{\mathcal{Q}} = \frac{d}{dt} \left ( \frac {\partial T}{\partial \mathbf{\dot{q}}} \right ) - \frac {\partial T}{\partial \mathbf{q}}\,, $$ where T is the total kinetic energy of the system, and the notation $$ \frac {\partial}{\partial \mathbf{q}} = \left(\frac{\partial }{\partial q_1}, \frac{\partial }{\partial q_2}, \dots, \frac{\partial }{\partial q_N}\right) $$ is a useful shorthand (see matrix calculus for this notation).
### Constraints

If the curvilinear coordinate system is defined by the standard position vector $$ \mathbf{r} $$ , and if the position vector can be written in terms of the generalized coordinates and time in the form: $$ \mathbf{r} = \mathbf{r}(\mathbf{q}(t),t) $$ and this relation holds for all times $$ t $$ , then the $$ \mathbf{q} $$ are called holonomic constraints. The vector $$ \mathbf{r} $$ is explicitly dependent on $$ t $$ in cases when the constraints vary with time, not just because of $$ \mathbf{q}(t) $$ . For time-independent situations, the constraints are also called scleronomic; for time-dependent cases they are called rheonomic.
## Lagrangian mechanics

The introduction of generalized coordinates and the fundamental Lagrangian function: $$ L(\mathbf{q},\mathbf{\dot{q}},t) = T(\mathbf{q},\mathbf{\dot{q}},t) - V(\mathbf{q},\mathbf{\dot{q}},t) $$ where T is the total kinetic energy and V is the total potential energy of the entire system, then either following the calculus of variations or using the above formula, leads to the Euler–Lagrange equations: $$ \frac{d}{dt}\left(\frac{\partial L}{\partial \mathbf{\dot{q}}}\right) = \frac{\partial L}{\partial \mathbf{q}} \,, $$ which are a set of N second-order ordinary differential equations, one for each qi(t).
This formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of kinetic energy is least, assuming the total energy to be fixed, and imposing no conditions on the time of transit.
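Continuing the pendulum illustration introduced earlier (added material, not part of the article), the Euler–Lagrange equation can be formed mechanically with SymPy; the mass m, length ℓ, and gravitational acceleration g are assumed parameters.

```python
import sympy as sp

t, m, ell, g = sp.symbols('t m ell g', positive=True)
theta = sp.Function('theta')(t)
theta_dot = sp.diff(theta, t)

# Kinetic and potential energy of a planar pendulum in the single generalized coordinate theta.
T = sp.Rational(1, 2) * m * ell**2 * theta_dot**2
V = -m * g * ell * sp.cos(theta)
L = T - V

# Euler-Lagrange equation: d/dt(dL/d(theta_dot)) - dL/d(theta) = 0
eom = sp.diff(sp.diff(L, theta_dot), t) - sp.diff(L, theta)
print(sp.simplify(eom))   # m*ell**2*theta'' + m*g*ell*sin(theta)
```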
The Lagrangian formulation uses the configuration space of the system, the set of all possible generalized coordinates: $$ \mathcal{C} = \{ \mathbf{q} \in \mathbb{R}^N \}\,, $$ where $$ \mathbb{R}^N $$ is N-dimensional real space (see also set-builder notation). The particular solution to the Euler–Lagrange equations is called a (configuration) path or trajectory, i.e. one particular q(t) subject to the required initial conditions. The general solutions form a set of possible configurations as functions of time: $$ \{ \mathbf{q}(t) \in \mathbb{R}^N \,:\,t\ge 0,t\in \mathbb{R}\}\subseteq\mathcal{C}\,. $$ The configuration space can be defined more generally, and indeed more deeply, in terms of topological manifolds and the tangent bundle.
## Hamiltonian mechanics

The Legendre transformation of the Lagrangian replaces the generalized coordinates and velocities (q, q̇) with (q, p); the generalized coordinates and the generalized momenta conjugate to the generalized coordinates: $$ \mathbf{p} = \frac{\partial L}{\partial \mathbf{\dot{q}}} = \left(\frac{\partial L}{\partial \dot{q}_1},\frac{\partial L}{\partial \dot{q}_2},\cdots \frac{\partial L}{\partial \dot{q}_N}\right) = (p_1, p_2\cdots p_N)\,, $$ and introduces the Hamiltonian (which is in terms of generalized coordinates and momenta): $$ H(\mathbf{q},\mathbf{p},t) = \mathbf{p}\cdot\mathbf{\dot{q}} - L(\mathbf{q},\mathbf{\dot{q}},t) $$ where $$ \cdot $$ denotes the dot product, also leading to Hamilton's equations: $$ \mathbf{\dot{p}} = - \frac{\partial H}{\partial \mathbf{q}}\,,\quad \mathbf{\dot{q}} = + \frac{\partial H}{\partial \mathbf{p}} \,, $$ which are now a set of 2N first-order ordinary differential equations, one for each qi(t) and pi(t).
Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian: $$ \frac{dH}{dt}=-\frac{\partial L}{\partial t}\,, $$ which is often considered one of Hamilton's equations of motion in addition to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law: $$ \mathbf{\dot{p}} = \boldsymbol{\mathcal{Q}}\,. $$ Analogous to the configuration space, the set of all momenta is the generalized momentum space: $$ \mathcal{M} = \{ \mathbf{p}\in\mathbb{R}^N \}\,. $$ ("Momentum space" also refers to "k-space", the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and the theory of waves.) The set of all positions and momenta forms the phase space: $$ \mathcal{P} = \mathcal{C}\times\mathcal{M} = \{ (\mathbf{q},\mathbf{p})\in\mathbb{R}^{2N} \} \,, $$ that is, the Cartesian product of the configuration space and the generalized momentum space.
Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian: $$ \frac{dH}{dt}=-\frac{\partial L}{\partial t}\,, $$ which is often considered one of Hamilton's equations of motion additionally to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law: $$ \mathbf{\dot{p}} = \boldsymbol{\mathcal{Q}}\,. $$ Analogous to the configuration space, the set of all momenta is the generalized momentum space: $$ \mathcal{M} = \{ \mathbf{p}\in\mathbb{R}^N \}\,. $$ ("Momentum space" also refers to "k-space"; the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and theory of waves) The set of all positions and momenta form the phase space: $$ \mathcal{P} = \mathcal{C}\times\mathcal{M} = \{ (\mathbf{q},\mathbf{p})\in\mathbb{R}^{2N} \} \,, $$ that is, the Cartesian product of the configuration space and generalized momentum space. A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions.
https://en.wikipedia.org/wiki/Analytical_mechanics
The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law: $$ \mathbf{\dot{p}} = \boldsymbol{\mathcal{Q}}\,. $$ Analogous to the configuration space, the set of all momenta is the generalized momentum space: $$ \mathcal{M} = \{ \mathbf{p}\in\mathbb{R}^N \}\,. $$ ("Momentum space" also refers to "k-space"; the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and theory of waves) The set of all positions and momenta form the phase space: $$ \mathcal{P} = \mathcal{C}\times\mathcal{M} = \{ (\mathbf{q},\mathbf{p})\in\mathbb{R}^{2N} \} \,, $$ that is, the Cartesian product of the configuration space and generalized momentum space. A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait: $$ \{ (\mathbf{q}(t),\mathbf{p}(t))\in\mathbb{R}^{2N}\,:\,t\ge0, t\in\mathbb{R} \} \subseteq \mathcal{P}\,, $$
https://en.wikipedia.org/wiki/Analytical_mechanics
A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait: $$ \{ (\mathbf{q}(t),\mathbf{p}(t))\in\mathbb{R}^{2N}\,:\,t\ge0, t\in\mathbb{R} \} \subseteq \mathcal{P}\,, $$ ### The Poisson bracket All dynamical variables can be derived from position q, momentum p, and time t, and written as a function of these: A = A(q, p, t).
https://en.wikipedia.org/wiki/Analytical_mechanics
If A(q, p, t) and B(q, p, t) are two scalar valued dynamical variables, the Poisson bracket is defined by the generalized coordinates and momenta: $$ \begin{align} \{A,B\} \equiv \{A,B\}_{\mathbf{q},\mathbf{p}} & = \frac{\partial A}{\partial \mathbf{q}}\cdot\frac{\partial B}{\partial \mathbf{p}} - \frac{\partial A}{\partial \mathbf{p}}\cdot\frac{\partial B}{\partial \mathbf{q}}\\ & \equiv \sum_k \frac{\partial A}{\partial q_k}\frac{\partial B}{\partial p_k} - \frac{\partial A}{\partial p_k}\frac{\partial B}{\partial q_k}\,, \end{align} $$ Calculating the total derivative of one of these, say A, and substituting Hamilton's equations into the result leads to the time evolution of A: $$ \frac{dA}{dt} = \{A,H\} + \frac{\partial A}{\partial t}\,. $$ This equation in A is closely related to the equation of motion in the Heisenberg picture of quantum mechanics, in which classical dynamical variables become quantum operators (indicated by hats (^)), and the Poisson bracket is replaced by the commutator of operators via Dirac's canonical quantization: $$ \{A,B\} \rightarrow \frac{1}{i\hbar}[\hat{A},\hat{B}]\,. $$
https://en.wikipedia.org/wiki/Analytical_mechanics
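The Poisson-bracket relations above can be checked symbolically. The sketch below assumes a single degree of freedom and a harmonic-oscillator Hamiltonian; the observable A = qp is just an arbitrary example.

```python
# Sketch: the canonical Poisson bracket for one degree of freedom,
# checked symbolically with SymPy. H is an assumed harmonic-oscillator
# Hamiltonian; A is an arbitrary example dynamical variable.
import sympy as sp

q, p, t = sp.symbols('q p t')
m, k = sp.symbols('m k', positive=True)

def poisson(A, B):
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

H = p**2 / (2 * m) + k * q**2 / 2
A = q * p                                  # example observable

print(poisson(q, p))                       # {q, p} = 1
dA_dt = poisson(A, H) + sp.diff(A, t)      # dA/dt = {A, H} + dA/dt (explicit part)
print(sp.simplify(dA_dt))                  # p**2/m - k*q**2
```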
## Properties of the Lagrangian and the Hamiltonian Following are overlapping properties of the Lagrangian and Hamiltonian functions (Classical Mechanics, T.W.B. Kibble, European Physics Series, McGraw-Hill (UK), 1973). - All the individual generalized coordinates qi(t), velocities q̇i(t) and momenta pi(t) for every degree of freedom are mutually independent. Explicit time-dependence of a function means the function actually includes time t as a variable in addition to the q(t), p(t), not simply as a parameter through q(t) and p(t), which would mean explicit time-independence. - The Lagrangian is invariant under addition of the total time derivative of any function of q and t, that is: $$ L' = L +\frac{d}{dt}F(\mathbf{q},t) \,, $$ so the Lagrangians L and L′ describe exactly the same motion. In other words, the Lagrangian of a system is not unique. - Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of q, p and t, that is: (K is a frequently used letter in this case).
https://en.wikipedia.org/wiki/Analytical_mechanics
In other words, the Lagrangian of a system is not unique. - Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of q, p and t, that is: (K is a frequently used letter in this case). This property is used in canonical transformations (see below). - If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved, this immediately follows from Lagrange's equations: Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates. - If the Lagrangian is time-independent the Hamiltonian is also time-independent (i.e. both are constant in time). - If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, and the Lagrangian is explicitly time-independent, then: where λ is a constant, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system: This is the basis for the Schrödinger equation, inserting quantum operators directly obtains it.
https://en.wikipedia.org/wiki/Analytical_mechanics
This property is used in canonical transformations (see below). - If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved, this immediately follows from Lagrange's equations: Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates. - If the Lagrangian is time-independent the Hamiltonian is also time-independent (i.e. both are constant in time). - If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, and the Lagrangian is explicitly time-independent, then: where λ is a constant, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system: This is the basis for the Schrödinger equation, inserting quantum operators directly obtains it. ## Principle of least action Action is another quantity in analytical mechanics defined as a functional of the Lagrangian: A general way to find the equations of motion from the action is the principle of least action:Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (Verlagsgesellschaft)
https://en.wikipedia.org/wiki/Analytical_mechanics
It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates. - If the Lagrangian is time-independent the Hamiltonian is also time-independent (i.e. both are constant in time). - If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, and the Lagrangian is explicitly time-independent, then: where λ is a constant, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system: This is the basis for the Schrödinger equation, inserting quantum operators directly obtains it. ## Principle of least action Action is another quantity in analytical mechanics defined as a functional of the Lagrangian: A general way to find the equations of motion from the action is the principle of least action:Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3 where the departure t1 and arrival t2 times are fixed. The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space , in other words q(t) tracing out a path in . The path for which action is least is the path taken by the system.
https://en.wikipedia.org/wiki/Analytical_mechanics
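A minimal numerical sketch of the principle: for an assumed free particle with fixed endpoints q(t1) = 0 and q(t2) = 1, the discretized action of the straight-line path is compared with that of a perturbed path sharing the same endpoints. The discretization and the perturbation are illustrative choices, not part of the source.

```python
# Sketch: a discretized check of the principle of least action for a free
# particle (L = 0.5*m*v^2) between fixed endpoints q(0) = 0 and q(1) = 1.
# The straight-line path should give a smaller action than a perturbed path.
import math

m, t1, t2, n = 1.0, 0.0, 1.0, 1000
dt = (t2 - t1) / n

def action(path):
    """Discretized S = sum over steps of L(q, v) * dt with L = m*v^2/2."""
    S = 0.0
    for i in range(n):
        v = (path[i + 1] - path[i]) / dt
        S += 0.5 * m * v * v * dt
    return S

times = [t1 + i * dt for i in range(n + 1)]
straight = [tau for tau in times]                                  # true path q(t) = t
wiggled = [tau + 0.1 * math.sin(math.pi * tau) for tau in times]   # same endpoints

print(action(straight))   # 0.5
print(action(wiggled))    # strictly larger than 0.5
```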
The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space , in other words q(t) tracing out a path in . The path for which action is least is the path taken by the system. From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), and underlies the path integral formulation of quantum mechanics,Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004, Quantum Field Theory, D. McMahon, Mc Graw Hill (US), 2008, and is used for calculating geodesic motion in general relativity. Relativity, Gravitation, and Cosmology, R.J.A. Lambourne, Open University, Cambridge University Press, 2010,
https://en.wikipedia.org/wiki/Analytical_mechanics
This approach can be extended to fields rather than a system of particles (see below), and underlies the path integral formulation of quantum mechanics,Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004, Quantum Field Theory, D. McMahon, Mc Graw Hill (US), 2008, and is used for calculating geodesic motion in general relativity. Relativity, Gravitation, and Cosmology, R.J.A. Lambourne, Open University, Cambridge University Press, 2010, ## Hamilton–Jacobi mechanics Canonical transformations The invariance of the Hamiltonian (under addition of the partial time derivative of an arbitrary function of p, q, and t) allows the Hamiltonian in one set of coordinates q and momenta p to be transformed into a new set Q = Q(q, p, t) and P = P(q, p, t), in four possible ways: With the restriction on P and Q such that the transformed Hamiltonian system is: the above transformations are called canonical transformations, each function Gn is called a generating function of the "nth kind" or "type-n". The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem. The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation.
https://en.wikipedia.org/wiki/Analytical_mechanics
The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem. The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation. One simple criterion for a transformation q → Q and p → P to be canonical is that the Poisson bracket $$ \{Q_i, P_i\}_{\mathbf{q},\mathbf{p}} = 1 $$ for all i = 1, 2, ..., N. If this does not hold then the transformation is not canonical. The Hamilton–Jacobi equation By setting the canonically transformed Hamiltonian K = 0, and the type-2 generating function equal to Hamilton's principal function (also the action) plus an arbitrary constant C: the generalized momenta become: and P is constant, then the Hamilton–Jacobi equation (HJE) can be derived from the type-2 canonical transformation: where H is the Hamiltonian as before: Another related function is Hamilton's characteristic function, used to solve the HJE by additive separation of variables for a time-independent Hamiltonian H. The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields.
https://en.wikipedia.org/wiki/Analytical_mechanics
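The Poisson-bracket criterion above can be tested directly in a symbolic sketch. Both example transformations below are assumptions chosen for illustration: the exchange Q = p, P = -q is canonical, while a naive rescaling is not.

```python
# Sketch: testing whether a coordinate change (q, p) -> (Q, P) is canonical
# by checking the Poisson bracket {Q, P}_{q,p} = 1, as stated above.
import sympy as sp

q, p = sp.symbols('q p')

def poisson(A, B):
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

# Example 1 (assumed): exchange transformation Q = p, P = -q  -> canonical
Q1, P1 = p, -q
print(sp.simplify(poisson(Q1, P1)))   # 1

# Example 2 (assumed): naive rescaling Q = 2q, P = 2p -> not canonical
Q2, P2 = 2 * q, 2 * p
print(sp.simplify(poisson(Q2, P2)))   # 4, so the bracket is not unity
```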
The Hamilton–Jacobi equation By setting the canonically transformed Hamiltonian K = 0, and the type-2 generating function equal to Hamilton's principal function (also the action) plus an arbitrary constant C: the generalized momenta become: and P is constant, then the Hamilton–Jacobi equation (HJE) can be derived from the type-2 canonical transformation: where H is the Hamiltonian as before: Another related function is Hamilton's characteristic function, used to solve the HJE by additive separation of variables for a time-independent Hamiltonian H. The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields. Routhian mechanics Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ...
https://en.wikipedia.org/wiki/Analytical_mechanics
Routhian mechanics Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ... qs with conjugate momenta p = p1, p2, ... ps, with the rest of the coordinates non-cyclic and denoted ζ = ζ1, ζ2, ..., ζN − s, they can be removed by introducing the Routhian: which leads to a set of 2s Hamiltonian equations for the cyclic coordinates q, and N − s Lagrangian equations in the non cyclic coordinates ζ. Set up in this way, although the Routhian has the form of the Hamiltonian, it can be thought of as a Lagrangian with N − s degrees of freedom. The coordinates q do not have to be cyclic; the partition between which coordinates enter the Hamiltonian equations and those which enter the Lagrangian equations is arbitrary. It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non cyclic coordinates to the Lagrangian equations of motion.
https://en.wikipedia.org/wiki/Analytical_mechanics
The coordinates q do not have to be cyclic; the partition between which coordinates enter the Hamiltonian equations and those which enter the Lagrangian equations is arbitrary. It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non cyclic coordinates to the Lagrangian equations of motion. ## Appellian mechanics Appell's equations of motion involve generalized accelerations, the second time derivatives of the generalized coordinates: as well as the generalized forces mentioned above in D'Alembert's principle. The equations are: where ak is the acceleration of the kth particle, the second time derivative of its position vector. Each acceleration ak is expressed in terms of the generalized accelerations αr; likewise each rk is expressed in terms of the generalized coordinates qr. ## Classical field theory ### Lagrangian field theory Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves: and the Euler–Lagrange equations have an analogue for fields: where ∂μ denotes the 4-gradient and the summation convention has been used.
https://en.wikipedia.org/wiki/Analytical_mechanics
### Lagrangian field theory Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves: and the Euler–Lagrange equations have an analogue for fields: where ∂μ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second order partial differential equations in the fields, which in general will be coupled and nonlinear. This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields. The Lagrangian is the volume integral of the Lagrangian density:Gravitation, J.A. Wheeler, C. Misner, K.S. Thorne, W.H. Freeman & Co, 1973, Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations: such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation.
https://en.wikipedia.org/wiki/Analytical_mechanics
The Lagrangian is the volume integral of the Lagrangian density:Gravitation, J.A. Wheeler, C. Misner, K.S. Thorne, W.H. Freeman & Co, 1973, Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations: such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation. ### Hamiltonian field theory The corresponding "momentum" field densities conjugate to the N scalar fields φi(r, t) are: where in this context the overdot denotes a partial time derivative, not a total time derivative. The Hamiltonian density is defined by analogy with mechanics: The equations of motion are: where the variational derivative must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear. Again, the volume integral of the Hamiltonian density is the Hamiltonian
https://en.wikipedia.org/wiki/Analytical_mechanics
For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear. Again, the volume integral of the Hamiltonian density is the Hamiltonian. ## Symmetry, conservation, and Noether's theorem Symmetry transformations in classical space and time Each transformation can be described by an operator (i.e. a function acting on the position r or momentum p variables to change them). The cases when the operator does not change r or p, i.e. symmetries, are translational symmetry, time translation, rotational invariance, Galilean transformations, parity, and T-symmetry, each specified by an operator together with its action on position and momentum, where R(n̂, θ) is the rotation matrix about an axis defined by the unit vector n̂ and angle θ. Noether's theorem Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law, i.e. the action (and hence the Lagrangian) does not change under a transformation parameterized by a parameter s: the Lagrangian describes the same motion independent of s, which can be length, angle of rotation, or time. The momenta conjugate to q will then be conserved.
https://en.wikipedia.org/wiki/Analytical_mechanics
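As a hedged illustration of the Noether-type statement above, the sketch below checks symbolically that a translation-invariant Lagrangian conserves total momentum; the two-particle system and the symbol names are assumptions made for the example.

```python
# Sketch: a symbolic check that translational symmetry implies conservation of
# total momentum, for two particles on a line whose interaction potential V
# depends only on their separation (an assumed example system).
import sympy as sp

t = sp.symbols('t')
x1, x2 = sp.Function('x1')(t), sp.Function('x2')(t)
V = sp.Function('V')                       # arbitrary interaction potential

# By the Euler-Lagrange equations, dp_i/dt = dL/dx_i = -dV/dx_i for each particle.
dp1_dt = -sp.diff(V(x1 - x2), x1)
dp2_dt = -sp.diff(V(x1 - x2), x2)

# Because V depends only on the difference x1 - x2 (translation invariance),
# the two force terms cancel and the total momentum p1 + p2 is conserved.
print(sp.simplify(dp1_dt + dp2_dt))        # 0
```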
In database theory, the PACELC design principle is an extension to the CAP theorem. It states that in case of network partitioning (P) in a distributed computer system, one has to choose between availability (A) and consistency (C) (as per the CAP theorem), but else (E), even when the system is running normally in the absence of partitions, one has to choose between latency (L) and loss of consistency (C). ## Overview The CAP theorem can be phrased as "PAC", the impossibility theorem that no distributed data store can be both consistent and available in executions that contains partitions. This can be proved by examining latency: if a system ensures consistency, then operation latencies grow with message delays, and hence operations cannot terminate eventually if the network is partitioned, i.e. the system cannot ensure availability. In the absence of partitions, both consistency and availability can be satisfied. PACELC therefore goes further and examines how the system replicates data. Specifically, in the absence of partitions, an additional trade-off (ELC) exists between latency and consistency. If the store is atomically consistent, then the sum of the read and write delay is at least the message delay.
https://en.wikipedia.org/wiki/PACELC_design_principle
Specifically, in the absence of partitions, an additional trade-off (ELC) exists between latency and consistency. If the store is atomically consistent, then the sum of the read and write delay is at least the message delay. In practice, most systems rely on explicit acknowledgments rather than timed delays to ensure delivery, requiring a full network round trip and therefore message delay on both reads and writes to ensure consistency. In low latency systems, in contrast, consistency is relaxed in order to reduce latency. There are four configurations or tradeoffs in the PACELC space: - PA/EL - prioritize availability and latency over consistency - PA/EC - when there is a partition, choose availability; else, choose consistency - PC/EL - when there is a partition, choose consistency; else, choose latency - PC/EC - choose consistency at all times PC/EC and PA/EL provide natural cognitive models for an application developer. A PC/EC system provides a firm guarantee of atomic consistency, as in ACID, while PA/EL provides high availability and low latency with a more complex consistency model. In contrast, PA/EC and PC/EL systems only make conditional guarantees of consistency. The developer still has to write code to handle the cases where the guarantee is not upheld.
https://en.wikipedia.org/wiki/PACELC_design_principle
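The four PACELC classes can be encoded in a small sketch for quick reference. The enum, the helper function, and the handful of ratings (taken from the list later in this section) are purely illustrative and not part of any real library.

```python
# Sketch: representing the four PACELC trade-off classes and tagging a few
# systems with the ratings quoted later in this article. Illustrative only.
from enum import Enum

class PACELC(Enum):
    PA_EL = "availability under partition, latency otherwise"
    PA_EC = "availability under partition, consistency otherwise"
    PC_EL = "consistency under partition, latency otherwise"
    PC_EC = "consistency at all times"

RATINGS = {
    "Cassandra": PACELC.PA_EL,        # default configuration
    "Riak": PACELC.PA_EL,
    "MongoDB": PACELC.PA_EC,
    "PNUTS": PACELC.PC_EL,
    "PostgreSQL": PACELC.PC_EC,
    "VoltDB/H-Store": PACELC.PC_EC,
}

def tradeoff(system: str) -> str:
    rating = RATINGS[system]
    return f"{system}: {rating.name} ({rating.value})"

print(tradeoff("MongoDB"))   # MongoDB: PA_EC (availability under partition, ...)
```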
In contrast, PA/EC and PC/EL systems only make conditional guarantees of consistency. The developer still has to write code to handle the cases where the guarantee is not upheld. PA/EC systems are rare outside of the in-memory data grid industry, where systems are localized to geographic regions and the latency vs. consistency tradeoff is not significant. PC/EL is even more tricky to understand. PC does not indicate that the system is fully consistent; rather it indicates that the system does not reduce consistency beyond the baseline consistency level when a network partition occurs—instead, it reduces availability. Some experts like Marc Brooker argue that the CAP theorem is particularly relevant in intermittently connected environments, such as those related to the Internet of Things (IoT) and mobile applications. In these contexts, devices may become partitioned due to challenging physical conditions, such as power outages or when entering confined spaces like elevators. For distributed systems, such as cloud applications, it is more appropriate to use the PACELC theorem, which is more comprehensive and considers trade-offs such as latency and consistency even in the absence of network partitions. ## History The PACELC theorem was first described by Daniel Abadi from Yale University in 2010 in a blog post, which he later clarified in a paper in 2012.
https://en.wikipedia.org/wiki/PACELC_design_principle
For distributed systems, such as cloud applications, it is more appropriate to use the PACELC theorem, which is more comprehensive and considers trade-offs such as latency and consistency even in the absence of network partitions. ## History The PACELC theorem was first described by Daniel Abadi from Yale University in 2010 in a blog post, which he later clarified in a paper in 2012. The purpose of PACELC is to address his thesis that "Ignoring the consistency/latency trade-off of replicated systems is a major oversight [in CAP], as it is present at all times during system operation, whereas CAP is only relevant in the arguably rare case of a network partition." The PACELC theorem was proved formally in 2018 in a SIGACT News article. ## Database PACELC ratings Original database PACELC ratings are from. Subsequent updates contributed by wikipedia community. - The default versions of Amazon's early (internal) Dynamo, Cassandra, Riak, and Cosmos DB are PA/EL systems: if a partition occurs, they give up consistency for availability, and under normal operation they give up consistency for lower latency. - Fully ACID systems such as VoltDB/H-Store, Megastore, MySQL Cluster, and PostgreSQL are PC/EC: they refuse to give up consistency, and will pay the availability and latency costs to achieve it.
https://en.wikipedia.org/wiki/PACELC_design_principle
Subsequent updates contributed by wikipedia community. - The default versions of Amazon's early (internal) Dynamo, Cassandra, Riak, and Cosmos DB are PA/EL systems: if a partition occurs, they give up consistency for availability, and under normal operation they give up consistency for lower latency. - Fully ACID systems such as VoltDB/H-Store, Megastore, MySQL Cluster, and PostgreSQL are PC/EC: they refuse to give up consistency, and will pay the availability and latency costs to achieve it. Bigtable and related systems such as HBase are also PC/EC. - Amazon DynamoDB (launched January 2012) is quite different from the early (Amazon internal) Dynamo which was considered for the PACELC paper. DynamoDB follows a strong leader model, where every write is strictly serialized (and conditional writes carry no penalty) and supports read-after-write consistency. This guarantee does not apply to "Global Tables" across regions. The DynamoDB SDKs use eventually consistent reads by default (improved availability and throughput), but when a consistent read is requested the service will return either a current view to the item or an error. - Couchbase provides a range of consistency and availability options during a partition, and equally a range of latency and consistency options with no partition.
https://en.wikipedia.org/wiki/PACELC_design_principle
This guarantee does not apply to "Global Tables" across regions. The DynamoDB SDKs use eventually consistent reads by default (improved availability and throughput), but when a consistent read is requested the service will return either a current view to the item or an error. - Couchbase provides a range of consistency and availability options during a partition, and equally a range of latency and consistency options with no partition. Unlike most other databases, Couchbase doesn't have a single API set nor does it scale/replicate all data services homogeneously. For writes, Couchbase favors Consistency over Availability making it formally CP, but on read there is more user-controlled variability depending on index replication, desired consistency level and type of access (single document lookup vs range scan vs full-text search, etc.). On top of that, there is then further variability depending on cross-datacenter-replication (XDCR) which takes multiple CP clusters and connects them with asynchronous replication and Couchbase Lite which is an embedded database and creates a fully multi-master (with revision tracking) distributed topology.
https://en.wikipedia.org/wiki/PACELC_design_principle
For writes, Couchbase favors Consistency over Availability making it formally CP, but on read there is more user-controlled variability depending on index replication, desired consistency level and type of access (single document lookup vs range scan vs full-text search, etc.). On top of that, there is then further variability depending on cross-datacenter-replication (XDCR) which takes multiple CP clusters and connects them with asynchronous replication and Couchbase Lite which is an embedded database and creates a fully multi-master (with revision tracking) distributed topology. - Cosmos DB supports five tunable consistency levels that allow for tradeoffs between C/A during P, and L/C during E. Cosmos DB never violates the specified consistency level, so it's formally CP. - MongoDB can be classified as a PA/EC system. In the baseline case, the system guarantees reads and writes to be consistent. - PNUTS is a PC/EL system. - Hazelcast IMDG and indeed most in-memory data grids are an implementation of a PA/EC system; Hazelcast can be configured to be EL rather than EC.
https://en.wikipedia.org/wiki/PACELC_design_principle
- Cosmos DB supports five tunable consistency levels that allow for tradeoffs between C/A during P, and L/C during E. Cosmos DB never violates the specified consistency level, so it's formally CP. - MongoDB can be classified as a PA/EC system. In the baseline case, the system guarantees reads and writes to be consistent. - PNUTS is a PC/EL system. - Hazelcast IMDG and indeed most in-memory data grids are an implementation of a PA/EC system; Hazelcast can be configured to be EL rather than EC. Concurrency primitives (Lock, AtomicReference, CountDownLatch, etc.) can be either PC/EC or PA/EC. - FaunaDB implements Calvin, a transaction protocol created by Dr. Daniel Abadi, the author of the PACELC theorem, and offers users adjustable controls for LC tradeoff. It is PC/EC for strictly serializable transactions, and EL for serializable reads. A summary table rates each DDBS on the columns P+A, P+C, E+L, and E+C, covering Aerospike (with "paid only" and "optional" entries), Bigtable/HBase, Cassandra, Cosmos DB, Couchbase, Dynamo, DynamoDB, FaunaDB, Hazelcast IMDG, Megastore, MongoDB, MySQL Cluster, PNUTS, PostgreSQL, Riak, SpiceDB, and VoltDB/H-Store.
https://en.wikipedia.org/wiki/PACELC_design_principle
In computer science, a binary search tree (BST), also called an ordered or sorted binary tree, is a rooted binary tree data structure with the key of each internal node being greater than all the keys in the respective node's left subtree and less than the ones in its right subtree. The time complexity of operations on the binary search tree is linear with respect to the height of the tree. Binary search trees allow binary search for fast lookup, addition, and removal of data items. Since the nodes in a BST are laid out so that each comparison skips about half of the remaining tree, the lookup performance is proportional to that of binary logarithm. BSTs were devised in the 1960s for the problem of efficient storage of labeled data and are attributed to Conway Berners-Lee and David Wheeler. The performance of a binary search tree is dependent on the order of insertion of the nodes into the tree since arbitrary insertions may lead to degeneracy; several variations of the binary search tree can be built with guaranteed worst-case performance. The basic operations include: search, traversal, insert and delete. BSTs with guaranteed worst-case complexities perform better than an unsorted array, which would require linear search time.
https://en.wikipedia.org/wiki/Binary_search_tree
The basic operations include: search, traversal, insert and delete. BSTs with guaranteed worst-case complexities perform better than an unsorted array, which would require linear search time. The complexity analysis of BST shows that, on average, the insert, delete and search takes $$ O(\log n) $$ for $$ n $$ nodes. In the worst case, they degrade to that of a singly linked list: $$ O(n) $$ . To address the boundless increase of the tree height with arbitrary insertions and deletions, self-balancing variants of BSTs are introduced to bound the worst lookup complexity to that of the binary logarithm. AVL trees were the first self-balancing binary search trees, invented in 1962 by Georgy Adelson-Velsky and Evgenii Landis. Binary search trees can be used to implement abstract data types such as dynamic sets, lookup tables and priority queues, and used in sorting algorithms such as tree sort. ## History The binary search tree algorithm was discovered independently by several researchers, including P.F. Windley, Andrew Donald Booth, Andrew Colin, Thomas N. Hibbard. The algorithm is attributed to Conway Berners-Lee and David Wheeler, who used it for storing labeled data in magnetic tapes in 1960.
https://en.wikipedia.org/wiki/Binary_search_tree
## History The binary search tree algorithm was discovered independently by several researchers, including P.F. Windley, Andrew Donald Booth, Andrew Colin, Thomas N. Hibbard. The algorithm is attributed to Conway Berners-Lee and David Wheeler, who used it for storing labeled data in magnetic tapes in 1960. One of the earliest and popular binary search tree algorithm is that of Hibbard. The time complexity of a binary search tree increases boundlessly with the tree height if the nodes are inserted in an arbitrary order, therefore self-balancing binary search trees were introduced to bound the height of the tree to $$ O(\log n) $$ . Various height-balanced binary search trees were introduced to confine the tree height, such as AVL trees, Treaps, and red–black trees. The AVL tree was invented by Georgy Adelson-Velsky and Evgenii Landis in 1962 for the efficient organization of information. English translation by Myron J. Ricci in Soviet Mathematics - Doklady, 3:1259–1263, 1962. It was the first self-balancing binary search tree to be invented.
https://en.wikipedia.org/wiki/Binary_search_tree
English translation by Myron J. Ricci in Soviet Mathematics - Doklady, 3:1259–1263, 1962. It was the first self-balancing binary search tree to be invented. ## Overview A binary search tree is a rooted binary tree in which nodes are arranged in strict total order in which the nodes with keys greater than any particular node A is stored on the right sub-trees to that node A and the nodes with keys equal to or less than A are stored on the left sub-trees to A, satisfying the binary search property. Binary search trees are also efficacious in sortings and search algorithms. However, the search complexity of a BST depends upon the order in which the nodes are inserted and deleted; since in worst case, successive operations in the binary search tree may lead to degeneracy and form a singly linked list (or "unbalanced tree") like structure, thus has the same worst-case complexity as a linked list. Binary search trees are also a fundamental data structure used in construction of abstract data structures such as sets, multisets, and associative arrays. ## Operations ### Searching Searching in a binary search tree for a specific key can be programmed recursively or iteratively. Searching begins by examining the root node.
https://en.wikipedia.org/wiki/Binary_search_tree
### Searching Searching in a binary search tree for a specific key can be programmed recursively or iteratively. Searching begins by examining the root node. If the tree is , the key being searched for does not exist in the tree. Otherwise, if the key equals that of the root, the search is successful and the node is returned. If the key is less than that of the root, the search proceeds by examining the left subtree. Similarly, if the key is greater than that of the root, the search proceeds by examining the right subtree. This process is repeated until the key is found or the remaining subtree is $$ \text{nil} $$ . If the searched key is not found after a $$ \text{nil} $$ subtree is reached, then the key is not present in the tree. #### Recursive search The following pseudocode implements the BST search procedure through recursion. Recursive-Tree-Search(x, key) if x = NIL or key = x.key then return x if key < x.key then return Recursive-Tree-Search(x.left, key) else return Recursive-Tree-Search(x.right, key) end if The recursive procedure continues until a $$ \text{nil} $$ or the $$ \text{key} $$ being searched for are encountered.
https://en.wikipedia.org/wiki/Binary_search_tree
#### Recursive search The following pseudocode implements the BST search procedure through recursion. Recursive-Tree-Search(x, key) if x = NIL or key = x.key then return x if key < x.key then return Recursive-Tree-Search(x.left, key) else return Recursive-Tree-Search(x.right, key) end if The recursive procedure continues until a $$ \text{nil} $$ or the $$ \text{key} $$ being searched for are encountered. #### Iterative search The recursive version of the search can be "unrolled" into a while loop. On most machines, the iterative version is found to be more efficient. Iterative-Tree-Search(x, key) while x ≠ NIL and key ≠ x.key do if key < x.key then x := x.left else x := x.right end if repeat return x Since the search may proceed till some leaf node, the running time complexity of BST search is $$ O(h) $$ where $$ h $$ is the height of the tree. However, the worst case for BST search is $$ O(n) $$ where $$ n $$ is the total number of nodes in the BST, because an unbalanced BST may degenerate to a linked list. However, if the BST is height-balanced the height is $$ O(\log n) $$ .
https://en.wikipedia.org/wiki/Binary_search_tree
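For readers who prefer runnable code over pseudocode, here is a hedged Python sketch of the recursive and iterative searches; the Node class is a minimal assumed structure, not something defined by the source.

```python
# Sketch: recursive and iterative BST search, mirroring the pseudocode above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def recursive_search(x: Optional[Node], key: int) -> Optional[Node]:
    if x is None or key == x.key:
        return x
    if key < x.key:
        return recursive_search(x.left, key)
    return recursive_search(x.right, key)

def iterative_search(x: Optional[Node], key: int) -> Optional[Node]:
    while x is not None and key != x.key:
        x = x.left if key < x.key else x.right
    return x

# Tiny usage example: the tree with root 8, children 3 and 10.
root = Node(8, Node(3), Node(10))
print(recursive_search(root, 10).key)   # 10
print(iterative_search(root, 7))        # None (key not present)
```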
However, the worst case for BST search is $$ O(n) $$ where $$ n $$ is the total number of nodes in the BST, because an unbalanced BST may degenerate to a linked list. However, if the BST is height-balanced the height is $$ O(\log n) $$ . #### Successor and predecessor For certain operations, given a node $$ \text{x} $$ , finding the successor or predecessor of $$ \text{x} $$ is crucial. Assuming all the keys of a BST are distinct, the successor of a node $$ \text{x} $$ in a BST is the node with the smallest key greater than $$ \text{x} $$ 's key. On the other hand, the predecessor of a node $$ \text{x} $$ in a BST is the node with the largest key smaller than $$ \text{x} $$ 's key. The following pseudocode finds the successor and predecessor of a node $$ \text{x} $$ in a BST.
https://en.wikipedia.org/wiki/Binary_search_tree
On the other hand, the predecessor of a node $$ \text{x} $$ in a BST is the node with the largest key smaller than $$ \text{x} $$ 's key. The following pseudocode finds the successor and predecessor of a node $$ \text{x} $$ in a BST. BST-Successor(x) if x.right ≠ NIL then return BST-Minimum(x.right) end if y := x.parent while y ≠ NIL and x = y.right do x := y y := y.parent repeat return y BST-Predecessor(x) if x.left ≠ NIL then return BST-Maximum(x.left) end if y := x.parent while y ≠ NIL and x = y.left do x := y y := y.parent repeat return y Operations such as finding a node in a BST whose key is the maximum or minimum are critical in certain operations, such as determining the successor and predecessor of nodes. Following is the pseudocode for the operations. BST-Maximum(x) while x.right ≠ NIL do x := x.right repeat return x BST-Minimum(x) while x.left ≠ NIL do x := x.left repeat return x
https://en.wikipedia.org/wiki/Binary_search_tree
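The same operations can be sketched in Python. The Node class below, with a parent pointer to mirror the pseudocode, is an assumed minimal implementation.

```python
# Sketch: BST-Minimum, BST-Maximum, BST-Successor and BST-Predecessor in Python.
from typing import Optional

class Node:
    def __init__(self, key: int, parent: "Optional[Node]" = None):
        self.key = key
        self.left: Optional[Node] = None
        self.right: Optional[Node] = None
        self.parent = parent

def minimum(x: Node) -> Node:
    while x.left is not None:
        x = x.left
    return x

def maximum(x: Node) -> Node:
    while x.right is not None:
        x = x.right
    return x

def successor(x: Node) -> Optional[Node]:
    # Smallest key greater than x.key (assumes distinct keys).
    if x.right is not None:
        return minimum(x.right)
    y = x.parent
    while y is not None and x is y.right:
        x, y = y, y.parent
    return y

def predecessor(x: Node) -> Optional[Node]:
    # Largest key smaller than x.key.
    if x.left is not None:
        return maximum(x.left)
    y = x.parent
    while y is not None and x is y.left:
        x, y = y, y.parent
    return y
```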
Following is the pseudocode for the operations. BST-Maximum(x) while x.right ≠ NIL do x := x.right repeat return x BST-Minimum(x) while x.left ≠ NIL do x := x.left repeat return x ### Insertion Operations such as insertion and deletion cause the BST representation to change dynamically. The data structure must be modified in such a way that the properties of BST continue to hold. New nodes are inserted as leaf nodes in the BST. Following is an iterative implementation of the insertion operation. 1 BST-Insert(T, z) 2 y := NIL 3 x := T.root 4 while x ≠ NIL do 5 y := x 6 if z.key < x.key then 7 x := x.left 8 else 9 x := x.right 10 end if 11 repeat 12 z.parent := y 13 if y = NIL then 14 T.root := z 15 else if z.key < y.key then 16 y.left := z 17 else 18 y.right := z 19 end if The procedure maintains a "trailing pointer" $$ \text{y} $$ as a parent of $$ \text{x} $$ . After initialization on line 2, the while loop along lines 4-11 causes the pointers to be updated.
https://en.wikipedia.org/wiki/Binary_search_tree
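A hedged Python sketch of the iterative insertion follows; the Node and BST classes are assumed minimal implementations with parent pointers, mirroring the pseudocode above.

```python
# Sketch: iterative BST insertion using a trailing pointer y, as in BST-Insert.
from typing import Optional

class Node:
    def __init__(self, key: int):
        self.key = key
        self.left: Optional[Node] = None
        self.right: Optional[Node] = None
        self.parent: Optional[Node] = None

class BST:
    def __init__(self):
        self.root: Optional[Node] = None

    def insert(self, z: Node) -> None:
        y: Optional[Node] = None          # trailing pointer (future parent of z)
        x = self.root
        while x is not None:
            y = x
            x = x.left if z.key < x.key else x.right
        z.parent = y
        if y is None:                     # tree was empty
            self.root = z
        elif z.key < y.key:
            y.left = z
        else:
            y.right = z

# Usage: keys inserted in this order produce a BST rooted at 5.
tree = BST()
for k in (5, 2, 8, 1, 3):
    tree.insert(Node(k))
print(tree.root.key, tree.root.left.key, tree.root.right.key)   # 5 2 8
```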
if z.key < y.key then 16 y.left := z 17 else 18 y.right := z 19 end if The procedure maintains a "trailing pointer" $$ \text{y} $$ as a parent of $$ \text{x} $$ . After initialization on line 2, the while loop along lines 4-11 causes the pointers to be updated. If $$ \text{y} $$ is $$ \text{nil} $$ , the BST is empty, thus $$ \text{z} $$ is inserted as the root node of the binary search tree $$ \text{T} $$ ; if it is not $$ \text{nil} $$ , insertion proceeds by comparing the keys to that of $$ \text{y} $$ on the lines 15-19 and the node is inserted accordingly. ### Deletion The deletion of a node, say $$ \text{Z} $$ , from the binary search tree $$ \text{BST} $$ has three cases: 1. If $$ \text{Z} $$ is a leaf node, it is replaced by $$ \text{NIL} $$ as shown in (a). 1.
https://en.wikipedia.org/wiki/Binary_search_tree
If $$ \text{Z} $$ is a leaf node, it is replaced by $$ \text{NIL} $$ as shown in (a). 1. If $$ \text{Z} $$ has only one child, the child node of $$ \text{Z} $$ gets elevated by modifying the parent node of $$ \text{Z} $$ to point to the child node, consequently taking $$ \text{Z} $$ 's position in the tree, as shown in (b) and (c). 1. If $$ \text{Z} $$ has both left and right children, the in-order successor of $$ \text{Z} $$ , say $$ \text{Y} $$ , displaces $$ \text{Z} $$ by following the two cases: 1. If $$ \text{Y} $$ is $$ \text{Z} $$ 's right child, as shown in (d), $$ \text{Y} $$ displaces $$ \text{Z} $$ and $$ \text{Y} $$ 's right child remain unchanged. 1.
https://en.wikipedia.org/wiki/Binary_search_tree
If $$ \text{Y} $$ is $$ \text{Z} $$ 's right child, as shown in (d), $$ \text{Y} $$ displaces $$ \text{Z} $$ and $$ \text{Y} $$ 's right child remain unchanged. 1. If $$ \text{Y} $$ lies within $$ \text{Z} $$ 's right subtree but is not $$ \text{Z} $$ 's right child, as shown in (e), $$ \text{Y} $$ first gets replaced by its own right child, and then it displaces $$ \text{Z} $$ 's position in the tree. 1. Alternatively, the in-order predecessor can also be used. The following pseudocode implements the deletion operation in a binary search tree. 1 BST-Delete(BST, z) 2
https://en.wikipedia.org/wiki/Binary_search_tree
The following pseudocode implements the deletion operation in a binary search tree. 1 BST-Delete(BST, z) 2 if z.left = NIL then 3 Shift-Nodes(BST, z, z.right) 4 else if z.right = NIL then 5 Shift-Nodes(BST, z, z.left) 6 else 7 y := BST-Successor(z) 8 if y.parent ≠ z then 9 Shift-Nodes(BST, y, y.right) 10 y.right := z.right 11 y.right.parent := y 12 end if 13 Shift-Nodes(BST, z, y) 14 y.left := z.left 15 y.left.parent := y 16 end if 1 Shift-Nodes(BST, u, v) 2 if u.parent = NIL then 3 BST.root := v 4 else if u = u.parent.left then 5 u.parent.left := v 5 else 6 u.parent.right := v 7 end if 8 if v ≠ NIL then 9 v.parent := u.parent 10 end if The $$ \text{BST-Delete} $$ procedure deals with the 3 special cases mentioned above. Lines 2-3 deal with case 1; lines 4-5 deal with case 2 and lines 6-16 for case 3.
https://en.wikipedia.org/wiki/Binary_search_tree
if z.left = NIL then 3 Shift-Nodes(BST, z, z.right) 4 else if z.right = NIL then 5 Shift-Nodes(BST, z, z.left) 6 else 7 y := BST-Successor(z) 8 if y.parent ≠ z then 9 Shift-Nodes(BST, y, y.right) 10 y.right := z.right 11 y.right.parent := y 12 end if 13 Shift-Nodes(BST, z, y) 14 y.left := z.left 15 y.left.parent := y 16 end if 1 Shift-Nodes(BST, u, v) 2 if u.parent = NIL then 3 BST.root := v 4 else if u = u.parent.left then 5 u.parent.left := v 5 else 6 u.parent.right := v 7 end if 8 if v ≠ NIL then 9 v.parent := u.parent 10 end if The $$ \text{BST-Delete} $$ procedure deals with the 3 special cases mentioned above. Lines 2-3 deal with case 1; lines 4-5 deal with case 2 and lines 6-16 for case 3. The helper function $$ \text{Shift-Nodes} $$ is used within the deletion algorithm for the purpose of replacing the node $$ \text{u} $$ with $$ \text{v} $$ in the binary search tree $$ \text{BST} $$ .
https://en.wikipedia.org/wiki/Binary_search_tree
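The deletion procedure and its Shift-Nodes helper can likewise be sketched in Python. The Node and BST classes are the same kind of assumed minimal implementation used in the insertion sketch.

```python
# Sketch: BST-Delete and Shift-Nodes in Python, following the pseudocode above.
from typing import Optional

class Node:
    def __init__(self, key: int):
        self.key = key
        self.left: Optional[Node] = None
        self.right: Optional[Node] = None
        self.parent: Optional[Node] = None

class BST:
    def __init__(self):
        self.root: Optional[Node] = None

    def _minimum(self, x: Node) -> Node:
        while x.left is not None:
            x = x.left
        return x

    def _shift_nodes(self, u: Node, v: Optional[Node]) -> None:
        """Replace the subtree rooted at u with the subtree rooted at v."""
        if u.parent is None:
            self.root = v
        elif u is u.parent.left:
            u.parent.left = v
        else:
            u.parent.right = v
        if v is not None:
            v.parent = u.parent

    def delete(self, z: Node) -> None:
        if z.left is None:                  # cases 1 and 2: no left child
            self._shift_nodes(z, z.right)
        elif z.right is None:               # case 2: no right child
            self._shift_nodes(z, z.left)
        else:                               # case 3: two children
            y = self._minimum(z.right)      # in-order successor of z
            if y.parent is not z:
                self._shift_nodes(y, y.right)
                y.right = z.right
                y.right.parent = y
            self._shift_nodes(z, y)
            y.left = z.left
            y.left.parent = y
```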
Lines 2-3 deal with case 1; lines 4-5 deal with case 2 and lines 6-16 for case 3. The helper function $$ \text{Shift-Nodes} $$ is used within the deletion algorithm for the purpose of replacing the node $$ \text{u} $$ with $$ \text{v} $$ in the binary search tree $$ \text{BST} $$ . This procedure handles the deletion (and substitution) of $$ \text{u} $$ from $$ \text{BST} $$ . ## Traversal A BST can be traversed through three basic algorithms: inorder, preorder, and postorder tree walks. - Inorder tree walk: Nodes from the left subtree get visited first, followed by the root node and right subtree. Such a traversal visits all the nodes in the order of non-decreasing key sequence. - Preorder tree walk: The root node gets visited first, followed by left and right subtrees. - Postorder tree walk: Nodes from the left subtree get visited first, followed by the right subtree, and finally, the root. Following is a recursive implementation of the tree walks.
https://en.wikipedia.org/wiki/Binary_search_tree
- Postorder tree walk: Nodes from the left subtree get visited first, followed by the right subtree, and finally, the root. Following is a recursive implementation of the tree walks. Inorder-Tree-Walk(x) if x ≠ NIL then Inorder-Tree-Walk(x.left) visit node Inorder-Tree-Walk(x.right) end if Preorder-Tree-Walk(x) if x ≠ NIL then visit node Preorder-Tree-Walk(x.left) Preorder-Tree-Walk(x.right) end if Postorder-Tree-Walk(x) if x ≠ NIL then Postorder-Tree-Walk(x.left) Postorder-Tree-Walk(x.right) visit node end if ## Balanced binary search trees Without rebalancing, insertions or deletions in a binary search tree may lead to degeneration, resulting in a height $$ n $$ of the tree (where $$ n $$ is number of items in a tree), so that the lookup performance is deteriorated to that of a linear search. Keeping the search tree balanced and height bounded by $$ O(\log n) $$ is a key to the usefulness of the binary search tree.
https://en.wikipedia.org/wiki/Binary_search_tree
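A runnable sketch of the three walks follows; it yields keys rather than "visiting" nodes, and the Node class is again an assumed minimal structure.

```python
# Sketch: inorder, preorder and postorder tree walks as Python generators.
from dataclasses import dataclass
from typing import Iterator, Optional

@dataclass
class Node:
    key: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def inorder(x: Optional[Node]) -> Iterator[int]:
    if x is not None:
        yield from inorder(x.left)
        yield x.key
        yield from inorder(x.right)

def preorder(x: Optional[Node]) -> Iterator[int]:
    if x is not None:
        yield x.key
        yield from preorder(x.left)
        yield from preorder(x.right)

def postorder(x: Optional[Node]) -> Iterator[int]:
    if x is not None:
        yield from postorder(x.left)
        yield from postorder(x.right)
        yield x.key

# An in-order walk of a BST emits keys in non-decreasing order:
root = Node(5, Node(2, Node(1), Node(3)), Node(8))
print(list(inorder(root)))    # [1, 2, 3, 5, 8]
print(list(preorder(root)))   # [5, 2, 1, 3, 8]
```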
## Balanced binary search trees Without rebalancing, insertions or deletions in a binary search tree may lead to degeneration, resulting in a height $$ n $$ of the tree (where $$ n $$ is number of items in a tree), so that the lookup performance is deteriorated to that of a linear search. Keeping the search tree balanced and height bounded by $$ O(\log n) $$ is a key to the usefulness of the binary search tree. This can be achieved by "self-balancing" mechanisms during the updation operations to the tree designed to maintain the tree height to the binary logarithmic complexity. ### Height-balanced trees A tree is height-balanced if the heights of the left sub-tree and right sub-tree are guaranteed to be related by a constant factor. This property was introduced by the AVL tree and continued by the red–black tree. The heights of all the nodes on the path from the root to the modified leaf node have to be observed and possibly corrected on every insert and delete operation to the tree. ### Weight-balanced trees In a weight-balanced tree , the criterion of a balanced tree is the number of leaves of the subtrees. The weights of the left and right subtrees differ at most by $$ 1 $$ .
https://en.wikipedia.org/wiki/Binary_search_tree
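A small sketch of the height-balance check described above, using the AVL bound of 1 as the constant factor; the Node class is an assumed minimal structure.

```python
# Sketch: checking whether every node's left and right subtree heights
# differ by at most 1 (the AVL balance condition).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Node:
    key: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def height_and_balanced(x: Optional[Node]) -> Tuple[int, bool]:
    """Return (height, is AVL-balanced) for the subtree rooted at x."""
    if x is None:
        return -1, True
    hl, bl = height_and_balanced(x.left)
    hr, br = height_and_balanced(x.right)
    return 1 + max(hl, hr), bl and br and abs(hl - hr) <= 1

# A degenerate (linked-list-like) tree fails the check:
degenerate = Node(1, None, Node(2, None, Node(3)))
print(height_and_balanced(degenerate))   # (2, False)
```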
, the criterion of a balanced tree is the number of leaves of the subtrees. The weights of the left and right subtrees differ at most by $$ 1 $$ . However, the difference is bound by a ratio $$ \alpha $$ of the weights, since a strong balance condition of $$ 1 $$ cannot be maintained with $$ O(\log n) $$ rebalancing work during insert and delete operations. The $$ \alpha $$ -weight-balanced trees gives an entire family of balance conditions, where each left and right subtrees have each at least a fraction of $$ \alpha $$ of the total weight of the subtree. ### Types There are several self-balanced binary search trees, including T-tree, treap, red-black tree, B-tree, 2–3 tree, and Splay tree. ## Examples of applications ### Sort Binary search trees are used in sorting algorithms such as tree sort, where all the elements are inserted at once and the tree is traversed at an in-order fashion. BSTs are also used in quicksort. ### Priority queue operations Binary search trees are used in implementing priority queues, using the node's key as priorities.
https://en.wikipedia.org/wiki/Binary_search_tree
BSTs are also used in quicksort. ### Priority queue operations Binary search trees are used in implementing priority queues, using the node's key as priorities. Adding new elements to the queue follows the regular BST insertion operation but the removal operation depends on the type of priority queue: - If it is an ascending order priority queue, removal of an element with the lowest priority is done through leftward traversal of the BST. - If it is a descending order priority queue, removal of an element with the highest priority is done through rightward traversal of the BST.
https://en.wikipedia.org/wiki/Binary_search_tree
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved. Here, "quickly" means an algorithm exists that solves the task and runs in polynomial time (as opposed to, say, exponential time), meaning the task completion time is bounded above by a polynomial function on the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be verified in polynomial time is "NP", standing for "nondeterministic polynomial time". An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. The problem has been called the most important open problem in computer science.
https://en.wikipedia.org/wiki/P_versus_NP_problem
If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution. ## #### Example Consider the following yes/no problem: given an incomplete Sudoku grid of size $$ n^2 \times n^2 $$ , is there at least one legal solution where every row, column, and $$ n \times n $$ square contains the integers 1 through $$ n^2 $$ ? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem.
https://en.wikipedia.org/wiki/P_versus_NP_problem
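To illustrate why generalized Sudoku is in NP, the sketch below verifies a candidate solution in time polynomial in the grid size. The grid representation is an assumption, and for brevity the check that the candidate agrees with the given clues of the incomplete instance is omitted.

```python
# Sketch: a polynomial-time verifier for the generalized Sudoku problem above.
# Given an n^2 x n^2 candidate grid, it checks every row, column and n x n box;
# this quick verification is what places the problem in NP.
from typing import List

def verify_sudoku(grid: List[List[int]], n: int) -> bool:
    size = n * n
    want = set(range(1, size + 1))
    rows = all(set(row) == want for row in grid)
    cols = all(set(grid[r][c] for r in range(size)) == want for c in range(size))
    boxes = all(
        set(grid[br + r][bc + c] for r in range(n) for c in range(n)) == want
        for br in range(0, size, n) for bc in range(0, size, n)
    )
    return rows and cols and boxes

# 4x4 instance (n = 2); verification touches each cell a constant number of times.
solution = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
print(verify_sudoku(solution, 2))   # True
```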
It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.) ## History The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973). Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time.
https://en.wikipedia.org/wiki/P_versus_NP_problem