Each residue represents the relative contribution of that singularity to the transfer function's overall shape. By the residue theorem, the inverse Laplace transform depends only upon the poles and their residues. To find the residue P, we multiply both sides of the equation by s + α to get $$ \frac{1}{s + \beta} = P + { R (s + \alpha) \over s + \beta }. $$ Then by letting s = −α, the contribution from R vanishes and all that is left is $$ P = \left.{1 \over s+\beta}\right|_{s=-\alpha} = {1 \over \beta - \alpha}. $$ Similarly, the residue R is given by $$ R = \left.{1 \over s + \alpha}\right|_{s=-\beta} = {1 \over \alpha - \beta}. $$ Note that $$ R = {-1 \over \beta - \alpha} = - P $$ and so the substitution of R and P into the expanded expression for H(s) gives $$ H(s) = \left(\frac{1}{\beta - \alpha} \right) \cdot \left( { 1 \over s + \alpha } - { 1 \over s + \beta } \right). $$ Finally, using the linearity property and the known transform for exponential decay (see Item #3 in the Table of Laplace Transforms, above), we can take the inverse Laplace transform of H(s) to obtain $$ h(t) = \mathcal{L}^{-1}\{H(s)\} = \frac{1}{\beta - \alpha}\left(e^{-\alpha t} - e^{-\beta t}\right), $$ which is the impulse response of the system.

### Convolution

The same result can be achieved using the convolution property, as if the system were a series of filters with transfer functions 1/(s + α) and 1/(s + β). That is, the inverse of $$ H(s) = \frac{1}{(s + \alpha)(s + \beta)} = \frac{1}{s+\alpha} \cdot \frac{1}{s + \beta} $$ is $$ \mathcal{L}^{-1}\! \left\{ \frac{1}{s + \alpha} \right\} * \mathcal{L}^{-1}\! \left\{\frac{1}{s + \beta} \right\} = e^{-\alpha t} * e^{-\beta t} = \int_0^t e^{-\alpha x}e^{-\beta (t - x)}\, dx = \frac{e^{-\alpha t}-e^{-\beta t}}{\beta - \alpha}. $$
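Both routes are mechanical enough to check symbolically. A minimal sketch using SymPy (the tooling choice is an assumption of this illustration, not something the article prescribes):

```python
# Check h(t) = L^{-1}{ 1/((s+alpha)(s+beta)) } two ways: partial fractions and convolution.
from sympy import symbols, apart, exp, integrate, simplify

t, s, x = symbols('t s x', positive=True)
a, b = symbols('alpha beta', positive=True)

H = 1 / ((s + a) * (s + b))

# Partial-fraction route: apart() recovers the residues P = 1/(beta - alpha) = -R.
print(apart(H, s))

# Convolution route: (e^{-alpha t} * e^{-beta t})(t) written out as an integral.
h_conv = integrate(exp(-a * x) * exp(-b * (t - x)), (x, 0, t))

# Compare with the impulse response derived above, at sample values alpha=2, beta=5
# (substituting numbers sidesteps the Piecewise SymPy may emit for alpha == beta).
h_article = (exp(-a * t) - exp(-b * t)) / (b - a)
print(simplify((h_conv - h_article).subs({a: 2, b: 5})))  # expected: 0
```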
### Phase delay

| Time function | Laplace transform |
| --- | --- |
| $$ \sin(\omega t + \varphi) $$ | $$ \frac{s\sin(\varphi) + \omega\cos(\varphi)}{s^2 + \omega^2} $$ |
| $$ \cos(\omega t + \varphi) $$ | $$ \frac{s\cos(\varphi) - \omega\sin(\varphi)}{s^2 + \omega^2} $$ |

Starting with the Laplace transform, $$ X(s) = \frac{s\sin(\varphi) + \omega \cos(\varphi)}{s^2 + \omega^2} $$ we find the inverse by first rearranging terms in the fraction: $$ \begin{align} X(s) &= \frac{s \sin(\varphi)}{s^2 + \omega^2} + \frac{\omega \cos(\varphi)}{s^2 + \omega^2} \\ &= \sin(\varphi)\left(\frac{s}{s^2 + \omega^2}\right) + \cos(\varphi)\left(\frac{\omega}{s^2 + \omega^2}\right) \end{align} $$ We are now able to take the inverse Laplace transform of our terms: $$ \begin{align} x(t) &= \sin(\varphi) \mathcal{L}^{-1}\left\{\frac{s}{s^2 + \omega^2} \right\} + \cos(\varphi) \mathcal{L}^{-1}\left\{\frac{\omega}{s^2 + \omega^2} \right\} \\ &= \sin(\varphi)\cos(\omega t) + \cos(\varphi)\sin(\omega t) \end{align} $$ This is just the sine of the sum of the arguments, yielding: $$ x(t) = \sin (\omega t + \varphi). $$ We can apply similar logic to find that $$ \mathcal{L}^{-1} \left\{ \frac{s\cos\varphi - \omega \sin\varphi}{s^2 + \omega^2} \right\} = \cos{(\omega t + \varphi)}. $$
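The identity can be confirmed by running the transform in the forward direction; again a minimal SymPy sketch (tooling assumed as above):

```python
# Forward check: L{ sin(omega t + phi) } should reproduce the rational function above.
from sympy import symbols, sin, cos, laplace_transform, expand_trig, simplify

t, s = symbols('t s', positive=True)
w, phi = symbols('omega varphi', positive=True)

# Expand by the angle-addition identity first so each term is a standard transform.
F = laplace_transform(expand_trig(sin(w * t + phi)), t, s, noconds=True)

target = (s * sin(phi) + w * cos(phi)) / (s**2 + w**2)
print(simplify(F - target))  # expected: 0
```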
### Statistical mechanics

In statistical mechanics, the Laplace transform of the density of states $$ g(E) $$ defines the partition function. That is, the canonical partition function $$ Z(\beta) $$ is given by $$ Z(\beta) = \int_0^\infty e^{-\beta E}g(E)\,dE $$ and the inverse is given by $$ g(E) = \frac{1}{2\pi i} \int_{\beta_0-i\infty}^{\beta_0+i\infty} e^{\beta E}Z(\beta) \, d\beta $$

### Spatial (not time) structure from astronomical spectrum

The wide and general applicability of the Laplace transform and its inverse is illustrated by an application in astronomy: rather than relating the time domain with the spectrum (frequency domain), it provides some information on the spatial distribution of matter of an astronomical source of radiofrequency thermal radiation too distant to resolve as more than a point, given its flux density spectrum. Assuming certain properties of the object, e.g. spherical shape and constant temperature, calculations based on carrying out an inverse Laplace transformation on the spectrum of the object can produce the only possible model of the distribution of matter in it (density as a function of distance from the center) consistent with the spectrum. When independent information on the structure of an object is available, the inverse Laplace transform method has been found to be in good agreement.

### Birth and death processes

Consider a random walk, with steps $$ \{+1,-1\} $$ occurring with probabilities $$ p,q=1-p $$. Suppose also that the time step is a Poisson process, with parameter $$ \lambda $$. Then the probability of the walk being at the lattice point $$ n $$ at time $$ t $$ is $$ P_n(t) = \int_0^t\lambda e^{-\lambda(t-s)}(pP_{n-1}(s) + qP_{n+1}(s))\,ds\quad (+e^{-\lambda t}\quad\text{when}\ n=0). $$ This leads to a system of integral equations (or equivalently a system of differential equations). However, because it is a system of convolution equations, the Laplace transform converts it into a system of linear equations for $$ \pi_n(s) = \mathcal L(P_n)(s), $$ namely: $$ \pi_n(s) = \frac{\lambda}{\lambda+s}(p\pi_{n-1}(s) + q\pi_{n+1}(s))\quad (+\frac1{\lambda + s}\quad \text{when}\ n=0) $$ which may now be solved by standard methods.
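To make the setup concrete, the walk itself can be simulated and the empirical occupation probabilities inspected; a Monte Carlo sketch in Python (the parameter values are illustrative assumptions):

```python
# Monte Carlo estimate of P_n(t) for the Poisson-timed random walk.
import random

def simulate_position(p, lam, t_end, rng):
    """Walk takes +1 with prob p, -1 with prob 1-p, at Poisson(lam) event times."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)   # waiting time between steps is Exp(lam)
        if t > t_end:
            return n
        n += 1 if rng.random() < p else -1

rng = random.Random(0)
p, lam, t_end, trials = 0.5, 2.0, 3.0, 100_000
counts = {}
for _ in range(trials):
    n = simulate_position(p, lam, t_end, rng)
    counts[n] = counts.get(n, 0) + 1

for n in sorted(counts):
    print(n, counts[n] / trials)   # empirical P_n(t_end)
```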
### Tauberian theory

The Laplace transform of the measure $$ \mu $$ on $$ [0,\infty) $$ is given by $$ \mathcal L\mu(s) = \int_0^\infty e^{-st}d\mu(t). $$ It is intuitively clear that, for small $$ s>0 $$, the exponentially decaying integrand will become more sensitive to the concentration of the measure $$ \mu $$ on larger subsets of the domain. To make this more precise, introduce the distribution function: $$ M(t) = \mu([0,t)). $$ Formally, we expect a limit of the following kind: $$ \lim_{s\to 0^+}\mathcal L\mu(s) = \lim_{t\to\infty} M(t). $$ Tauberian theorems are theorems relating the asymptotics of the Laplace transform, as $$ s\to 0^+ $$, to those of the distribution of $$ \mu $$ as $$ t\to\infty $$. They are thus of importance in asymptotic formulae of probability and statistics, where often the spectral side has asymptotics that are simpler to infer. Two Tauberian theorems of note are the Hardy–Littlewood Tauberian theorem and Wiener's Tauberian theorem. The Wiener theorem generalizes the Ikehara Tauberian theorem, which is the following statement: Let A(x) be a non-negative, monotonic nondecreasing function of x, defined for 0 ≤ x < ∞. Suppose that $$ f(s)=\int_0^\infty A(x) e^{-xs}\,dx $$ converges for ℜ(s) > 1 to the function ƒ(s) and that, for some non-negative number c, $$ f(s) - \frac{c}{s-1} $$ has an extension as a continuous function for ℜ(s) ≥ 1. Then the limit as x goes to infinity of $$ e^{-x} A(x) $$ is equal to c. This statement can be applied in particular to the logarithmic derivative of the Riemann zeta function, and thus provides an extremely short way to prove the prime number theorem.
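For a concrete instance of the expected limit, take dμ(t) = e^(−t) dt, so that M(t) = 1 − e^(−t) → 1 and 𝓛μ(s) = 1/(1 + s) → 1 as s → 0⁺. A small numeric sketch (the choice of measure is an illustrative assumption):

```python
# Numeric illustration: L(mu)(s) -> lim M(t) as s -> 0+, for d(mu) = exp(-t) dt.
import math

def laplace_mu(s, steps=200_000, t_max=50.0):
    """Crude trapezoidal estimate of integral of exp(-s t) exp(-t) dt over [0, t_max]."""
    h = t_max / steps
    total = 0.5 * (1.0 + math.exp(-(s + 1.0) * t_max))
    for k in range(1, steps):
        total += math.exp(-(s + 1.0) * k * h)
    return total * h

for s in (1.0, 0.1, 0.01, 0.001):
    print(s, laplace_mu(s))   # approaches M(infinity) = 1 as s -> 0+
```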
https://en.wikipedia.org/wiki/Laplace_transform
In abstract algebra, a finite group is a group whose underlying set is finite. Finite groups often arise when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. Important examples of finite groups include cyclic groups and permutation groups. The study of finite groups has been an integral part of group theory since it arose in the 19th century. One major area of study has been classification: the classification of finite simple groups (those with no nontrivial normal subgroup) was completed in 2004.

## History

During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups (Daniel Gorenstein, "The Enormous Theorem", Scientific American, vol. 253, no. 6, December 1, 1985, pp. 104–115). As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known. During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields.
Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry.

## Examples

### Permutation groups

The symmetric group S_n on a finite set of n symbols is the group whose elements are all the permutations of the n symbols, and whose group operation is the composition of such permutations, which are treated as bijective functions from the set of symbols to itself. Since there are n! (n factorial) possible permutations of a set of n symbols, it follows that the order (the number of elements) of the symmetric group S_n is n!.

### Cyclic groups

A cyclic group Z_n is a group all of whose elements are powers of a particular element a where $$ a^n = a^0 = e $$, the identity. A typical realization of this group is as the complex n-th roots of unity. Sending a to a primitive root of unity gives an isomorphism between the two. This can be done with any finite cyclic group.
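The roots-of-unity realization is concrete enough to verify directly; a small hand-rolled sketch (no group-theory library assumed):

```python
# Z_n realized as the complex n-th roots of unity under multiplication.
import cmath

n = 6
zeta = cmath.exp(2j * cmath.pi / n)          # a primitive n-th root of unity
elements = [zeta**k for k in range(n)]

def index_of(z):
    """Recover the exponent k of a root of unity, tolerating float rounding."""
    return min(range(n), key=lambda k: abs(elements[k] - z))

# Multiplying roots adds their exponents mod n: exactly the group law of Z_n.
for i in range(n):
    for j in range(n):
        assert index_of(elements[i] * elements[j]) == (i + j) % n

print(f"Z_{n} realized: multiplication of roots = addition of exponents mod {n}")
```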
### Finite abelian groups

An abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on their order (the axiom of commutativity). They are named after Niels Henrik Abel. An arbitrary finite abelian group is isomorphic to a direct sum of finite cyclic groups of prime power order, and these orders are uniquely determined, forming a complete system of invariants.
The automorphism group of a finite abelian group can be described directly in terms of these invariants. The theory was first developed in the 1879 paper of Georg Frobenius and Ludwig Stickelberger and later was both simplified and generalized to finitely generated modules over a principal ideal domain, forming an important chapter of linear algebra.

### Groups of Lie type

A group of Lie type is a group closely related to the group G(k) of rational points of a reductive linear algebraic group G with values in the field k. Finite groups of Lie type give the bulk of nonabelian finite simple groups. Special cases include the classical groups, the Chevalley groups, the Steinberg groups, and the Suzuki–Ree groups. Finite groups of Lie type were among the first groups to be considered in mathematics, after cyclic, symmetric and alternating groups, with the projective special linear groups over prime finite fields, PSL(2, p), being constructed by Évariste Galois in the 1830s.
The systematic exploration of finite groups of Lie type started with Camille Jordan's theorem that the projective special linear group PSL(2, q) is simple for q ≠ 2, 3. This theorem generalizes to projective groups of higher dimensions and gives an important infinite family PSL(n, q) of finite simple groups. Other classical groups were studied by Leonard Dickson at the beginning of the 20th century. In the 1950s Claude Chevalley realized that, after an appropriate reformulation, many theorems about semisimple Lie groups admit analogues for algebraic groups over an arbitrary field k, leading to the construction of what are now called Chevalley groups. Moreover, as in the case of compact simple Lie groups, the corresponding groups turned out to be almost simple as abstract groups (Tits simplicity theorem).
Although it was known since the 19th century that other finite simple groups exist (for example, Mathieu groups), a belief gradually formed that nearly all finite simple groups can be accounted for by appropriate extensions of Chevalley's construction, together with cyclic and alternating groups. Moreover, the exceptions, the sporadic groups, share many properties with the finite groups of Lie type, and in particular can be constructed and characterized based on their geometry in the sense of Tits. The belief has now become a theorem – the classification of finite simple groups. Inspection of the list of finite simple groups shows that groups of Lie type over a finite field include all the finite simple groups other than the cyclic groups, the alternating groups, the Tits group, and the 26 sporadic simple groups.

## Main theorems

### Lagrange's theorem

For any finite group G, the order (number of elements) of every subgroup H of G divides the order of G. The theorem is named after Joseph-Louis Lagrange.
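Lagrange's theorem can be confirmed by brute force on a small group; a sketch over S_3 represented as permutation tuples (the exhaustive enumeration is illustrative, not an optimized algorithm):

```python
# Brute-force check of Lagrange's theorem on S_3.
from itertools import permutations, combinations

def compose(f, g):
    """(f o g) as tuples: i -> f[g[i]]."""
    return tuple(f[g[i]] for i in range(len(g)))

group = list(permutations(range(3)))          # S_3, order 6
identity = tuple(range(3))

# Enumerate all subgroups: in a finite group, a nonempty subset containing the
# identity and closed under the operation is a subgroup (inverses follow from
# finiteness).
subgroup_orders = set()
for r in range(1, len(group) + 1):
    for subset in combinations(group, r):
        s = set(subset)
        if identity in s and all(compose(a, b) in s for a in s for b in s):
            subgroup_orders.add(len(s))

print(sorted(subgroup_orders))                # [1, 2, 3, 6] -- all divide 6
```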
### Sylow theorems

The Sylow theorems provide a partial converse to Lagrange's theorem, giving information about how many subgroups of a given order are contained in G.

### Cayley's theorem

Cayley's theorem, named in honour of Arthur Cayley, states that every group G is isomorphic to a subgroup of the symmetric group acting on G. This can be understood as an example of the group action of G on the elements of G.

### Burnside's theorem

Burnside's theorem in group theory states that if G is a finite group of order $$ p^a q^b $$, where p and q are prime numbers, and a and b are non-negative integers, then G is solvable. Hence each non-Abelian finite simple group has order divisible by at least three distinct primes.

### Feit–Thompson theorem

The Feit–Thompson theorem, or odd order theorem, states that every finite group of odd order is solvable.
It was proved by Walter Feit and John G. Thompson.

### Classification of finite simple groups

The classification of finite simple groups is a theorem stating that every finite simple group belongs to one of the following families:

- A cyclic group with prime order;
- An alternating group of degree at least 5;
- A simple group of Lie type;
- One of the 26 sporadic simple groups;
- The Tits group (sometimes considered as a 27th sporadic group).

The finite simple groups can be seen as the basic building blocks of all finite groups, in a way reminiscent of the way the prime numbers are the basic building blocks of the natural numbers. The Jordan–Hölder theorem is a more precise way of stating this fact about finite groups. However, a significant difference with respect to the case of integer factorization is that such "building blocks" do not necessarily determine a group uniquely, since there might be many non-isomorphic groups with the same composition series or, put another way, the extension problem does not have a unique solution. The proof of the theorem consists of tens of thousands of pages in several hundred journal articles written by about 100 authors, published mostly between 1955 and 2004.
Gorenstein (d. 1992), Lyons, and Solomon are gradually publishing a simplified and revised version of the proof.

## Number of groups of a given order

Given a positive integer n, it is not at all a routine matter to determine how many isomorphism types of groups of order n there are. Every group of prime order is cyclic, because Lagrange's theorem implies that the cyclic subgroup generated by any of its non-identity elements is the whole group. If n is the square of a prime, then there are exactly two possible isomorphism types of group of order n, both of which are abelian. If n is a higher power of a prime, then results of Graham Higman and Charles Sims give asymptotically correct estimates for the number of isomorphism types of groups of order n, and the number grows very rapidly as the power increases.
Depending on the prime factorization of n, some restrictions may be placed on the structure of groups of order n, as a consequence, for example, of results such as the Sylow theorems. For example, every group of order pq is cyclic when q < p are primes with p − 1 not divisible by q. For a necessary and sufficient condition, see cyclic number. If n is squarefree, then any group of order n is solvable. Burnside's theorem, proved using group characters, states that every group of order n is solvable when n is divisible by fewer than three distinct primes, i.e. if $$ n = p^a q^b $$, where p and q are prime numbers, and a and b are non-negative integers. By the Feit–Thompson theorem, which has a long and complicated proof, every group of order n is solvable when n is odd. For every positive integer n, most groups of order n are solvable.
To see this for any particular order is usually not difficult (for example, there is, up to isomorphism, one non-solvable group and 12 solvable groups of order 60) but the proof of this for all orders uses the classification of finite simple groups. For any positive integer n there are at most two simple groups of order n, and there are infinitely many positive integers n for which there are two non-isomorphic simple groups of order n.

### Table of distinct groups of order n

| Order n | # Groups | Abelian | Non-Abelian |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 0 |
| 1 | 1 | 1 | 0 |
| 2 | 1 | 1 | 0 |
| 3 | 1 | 1 | 0 |
| 4 | 2 | 2 | 0 |
| 5 | 1 | 1 | 0 |
| 6 | 2 | 1 | 1 |
| 7 | 1 | 1 | 0 |
| 8 | 5 | 3 | 2 |
| 9 | 2 | 2 | 0 |
| 10 | 2 | 1 | 1 |
| 11 | 1 | 1 | 0 |
| 12 | 5 | 2 | 3 |
| 13 | 1 | 1 | 0 |
| 14 | 2 | 1 | 1 |
| 15 | 1 | 1 | 0 |
| 16 | 14 | 5 | 9 |
| 17 | 1 | 1 | 0 |
| 18 | 5 | 2 | 3 |
| 19 | 1 | 1 | 0 |
| 20 | 5 | 2 | 3 |
| 21 | 2 | 1 | 1 |
| 22 | 2 | 1 | 1 |
| 23 | 1 | 1 | 0 |
| 24 | 15 | 3 | 12 |
| 25 | 2 | 2 | 0 |
| 26 | 2 | 1 | 1 |
| 27 | 5 | 3 | 2 |
| 28 | 4 | 2 | 2 |
| 29 | 1 | 1 | 0 |
| 30 | 4 | 1 | 3 |
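The Abelian column of the table above follows from the structure theorem for finite abelian groups: the number of abelian groups of order n is the product, over the prime-power factors p^e of n, of the number of partitions of e. A short sketch reproducing that column:

```python
# Count abelian groups of order n: product of partition numbers of prime exponents.
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """Number of integer partitions of n with parts <= max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    if max_part == 0:
        return 0
    # Sum over the largest part k of the remaining partitions.
    return sum(partitions(n - k, min(k, n - k)) for k in range(1, min(max_part, n) + 1))

def abelian_count(n):
    count, p = 1, 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e:
            count *= partitions(e)
        p += 1
    if n > 1:                 # a leftover prime factor with exponent 1
        count *= partitions(1)
    return count

print([abelian_count(n) for n in range(1, 31)])
# [1, 1, 1, 2, 1, 1, 1, 3, 2, 1, 1, 2, 1, 1, 1, 5, 1, 2, 1, 2, 1, 1, 1, 3, 2, 1, 3, 2, 1, 1]
```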
https://en.wikipedia.org/wiki/Finite_group
Software versioning is the process of assigning either unique version names or unique version numbers to unique states of computer software. Within a given version number category (e.g., major or minor), these numbers are generally assigned in increasing order and correspond to new developments in the software. At a fine-grained level, revision control is used for keeping track of incrementally different versions of information, whether or not this information is computer software, in order to be able to roll any changes back. Modern computer software is often tracked using two different software versioning schemes: an internal version number that may be incremented many times in a single day, such as a revision control number, and a release version that typically changes far less often, such as semantic versioning or a project code name.

## History

File numbers were used especially in public administration, as well as companies, to uniquely identify files or cases. For computer files this practice was introduced for the first time with MIT's ITS file system, and later the TENEX filesystem for the PDP-10 in 1972. Later, lists of files including their versions were added, along with dependencies amongst them. Linux distributions like Debian, with its dpkg, early on created package management software which could resolve dependencies between their packages. Debian's first try was that a package knew the other packages that depended on it. From 1994 on this idea was inverted, so that a package knew the packages it needed.
When installing a package, dependency resolution was used to automatically calculate the packages needed as well, and install them with the desired package. To facilitate upgrades, minimum package versions were introduced. Thus the numbering scheme needed to tell which version was newer than the required one. (Bug#1167: ELF development packages fail or have missing dependencies, Ian Jackson, 1995-07-30, https://lists.debian.org/debian-devel/1995/07/msg00085.html)

## Schemes

A variety of version numbering schemes have been created to keep track of different versions of a piece of software. The ubiquity of computers has also led to these schemes being used in contexts outside computing.

### Sequence-based identifiers

In sequence-based software versioning schemes, each software release is assigned a unique identifier that consists of one or more sequences of numbers or letters. This is the extent of the commonality; schemes vary widely in areas such as the number of sequences, the attribution of meaning to individual sequences, and the means of incrementing the sequences.
#### Change significance

In some schemes, sequence-based identifiers are used to convey the significance of changes between releases. Changes are classified by significance level, and the decision of which sequence to change between releases is based on the significance of the changes from the previous release, whereby the first sequence is changed for the most significant changes, and changes to sequences after the first represent changes of decreasing significance. Depending on the scheme, significance may be assessed by lines of code changed, function points added or removed, the potential impact on customers in terms of work required to adopt a new version, risk of bugs or undeclared breaking changes, degree of changes in visual layout, the number of new features, or almost anything the product developers or marketers deem to be significant, including marketing desire to stress the "relative goodness" of the new version.

#### Semantic versioning

Semantic versioning (aka SemVer) is a widely adopted version scheme that encodes a version by a three-part version number
(Major.Minor.Patch), an optional pre-release tag, and an optional build meta tag. In this scheme, risk and functionality are the measures of significance. Breaking changes are indicated by increasing the major number (high risk); new, non-breaking features increment the minor number (medium risk); and all other non-breaking changes increment the patch number (lowest risk). The presence of a pre-release tag (-alpha, -beta) indicates substantial risk, as does a major number of zero (0.y.z), which is used to indicate a work-in-progress that may contain any level of potentially breaking changes (highest risk). As an example of inferring compatibility from a SemVer version, software which relies on version 2.1.5 of an API is compatible with version 2.2.3, but not necessarily with 3.2.4.
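The compatibility inference in this example can be phrased as a small predicate; a sketch handling plain release versions only (pre-release and build tags are omitted for brevity):

```python
# SemVer-style compatibility: same major number, and at least the required minor/patch.
def parse(version):
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def compatible(required, provided):
    r, p = parse(required), parse(provided)
    return p[0] == r[0] and p[1:] >= r[1:]   # tuple comparison: minor first, then patch

print(compatible("2.1.5", "2.2.3"))   # True  -- newer minor of the same major
print(compatible("2.1.5", "3.2.4"))   # False -- a major bump may break the API
```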
Developers may choose to jump multiple minor versions at a time to indicate that significant features have been added, but are not enough to warrant incrementing a major version number; for example, Internet Explorer 5 from 5.1 to 5.5, or Adobe Photoshop 5 to 5.5. This may be done to emphasize the value of the upgrade to the software user or, as in Adobe's case, to represent a release halfway between major versions (although levels of sequence-based versioning are not necessarily limited to a single digit, as in Blender version 2.91 or Minecraft Java Edition starting from 1.7.10). A different approach is to use the major and minor numbers along with an alphanumeric string denoting the release type, e.g. "alpha" (a), "beta" (b), or "release candidate" (rc).
A software release train using this approach might look like 0.5, 0.6, 0.7, 0.8, 0.9 → 1.0b1, 1.0b2 (with some fixes), 1.0b3 (with more fixes) → 1.0rc1 (if it is stable enough), 1.0rc2 (if more bugs are found) → 1.0. It is a common practice in this scheme to lock out new features and breaking changes during the release candidate phases and, for some teams, even betas are locked down to bug fixes only, to ensure convergence on the target release.

Other schemes impart meaning on individual sequences:

- major.minor[.build[.revision]] (example: 1.2.12.102)
- major.minor[.maintenance[.build]] (example: 1.4.3.5249)
Again, in these examples, the definition of what constitutes a "major" as opposed to a "minor" change is entirely subjective and up to the author, as is what defines a "build", or how a "revision" differs from a "minor" change.

Shared libraries in Solaris and Linux may use the current.revision.age format, where:

- current: The most recent interface number that the library implements.
- revision: The implementation number of the current interface.
- age: The difference between the newest and oldest interfaces that the library implements. This use of the third field is specific to libtool: others may use a different meaning or simply ignore it.

A similar problem of relative change significance and versioning nomenclature exists in book publishing, where edition numbers or names can be chosen based on varying criteria. In most proprietary software, the first released version of a software product has version 1.

#### Degree of compatibility

Some projects use the major version number to indicate incompatible releases. Two examples are Apache Portable Runtime (APR) and the FarCry CMS.
Often programmers write new software to be backward compatible, i.e., the new software is designed to interact correctly with older versions of the software (using old protocols and file formats) and the most recent version (using the latest protocols and file formats). For example, IBM z/OS is designed to work properly with 3 consecutive major versions of the operating system running in the same sysplex. This enables people who run a high availability computer cluster to keep most of the computers up and running while one machine at a time is shut down, upgraded, and restored to service. Often packet headers and file formats include a version number – sometimes the same as the version number of the software that wrote it; other times a "protocol version number" independent of the software version number. The code to handle old deprecated protocols and file formats is often seen as cruft.

#### Designating development stage

Software in the experimental stage (alpha or beta) often uses a zero in the first ("major") position of the sequence to designate its status. However, this scheme is only useful for the early stages, not for upcoming releases with established software where the version number has already progressed past 0.
A number of schemes are used to denote the status of a newer release:

- Alphanumeric suffix is a common scheme adopted by semantic versioning. In this scheme, versions have a dash plus some alphanumeric characters affixed to indicate the status.
- Numeric status is a scheme that uses numbers to indicate the status as if it were part of the sequence. A typical choice is the third position for four-position versioning.
- Numeric 90+ is another scheme that uses numbers, but places pre-releases under the number of the previous version: a large number in the last position, typically 90 or higher, is used. This is commonly used by older open-source projects like Fontconfig.

A comparison of development stage indicators:

| Development stage | Semantic versioning | Numeric status | Numeric 90+ |
| --- | --- | --- | --- |
| Alpha | 1.2.0-a.1 | 1.2.0.1 | 1.1.90 |
| Beta | 1.2.0-b.2 | 1.2.1.2 | 1.1.93 |
| Release candidate (RC) | 1.2.0-rc.3 | 1.2.2.3 | 1.1.97 |
| Release | 1.2.0 | 1.2.3.0 | 1.2.0 |
| Post-release fixes | 1.2.5 | 1.2.3.5 | 1.2.5 |
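The semantic-versioning column calls for explicit ordering logic across its tags; a sketch of one way to encode alpha < beta < rc < release (the tag spellings handled are illustrative assumptions):

```python
# Order pre-release tags below the untagged release: alpha < beta < rc < (none).
STAGE_RANK = {"a": 0, "alpha": 0, "b": 1, "beta": 1, "rc": 2, None: 3}

def sort_key(version):
    """Split '1.2.0-rc.3' into numeric core, stage rank, and stage number."""
    core, _, pre = version.partition("-")
    numbers = tuple(int(x) for x in core.split("."))
    if pre:
        stage, _, n = pre.partition(".")
        return numbers, STAGE_RANK[stage], int(n or 0)
    return numbers, STAGE_RANK[None], 0

releases = ["1.2.0", "1.2.0-rc.3", "1.2.0-a.1", "1.2.0-b.2"]
print(sorted(releases, key=sort_key))
# ['1.2.0-a.1', '1.2.0-b.2', '1.2.0-rc.3', '1.2.0']
```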
The two purely numeric forms remove the special logic required to handle the comparison of "alpha < beta < rc < no prefix" as found in semantic versioning, at the cost of clarity.

#### Incrementing sequences

There are two schools of thought regarding how numeric version numbers are incremented. Most free and open-source software packages, including MediaWiki, treat versions as a series of individual numbers, separated by periods, with a progression such as 1.7.0, 1.8.0, 1.8.1, 1.9.0, 1.10.0, 1.11.0, 1.11.1, 1.11.2, and so on. On the other hand, some software packages identify releases by decimal numbers: 1.7, 1.8, 1.81, 1.82, 1.9, etc. Decimal versions were common in the 1980s, for example with NetWare, DOS, and Microsoft Windows, but even in the 2000s have been used, for example, by Opera and Movable Type.
In the decimal scheme, 1.81 is the minor version following 1.8, while maintenance releases (i.e. bug fixes only) may be denoted with an alphabetic suffix, such as 1.81a or 1.81b. The standard GNU version numbering scheme is major.minor.revision, but Emacs is a notable example using another scheme where the major number (1) was dropped and a user site revision was added, which is always zero in original Emacs packages but increased by distributors. Similarly, Debian package numbers are prefixed with an optional "epoch", which is used to allow the versioning scheme to be changed.

#### Resetting

In some cases, developers may decide to reset the major version number. This is sometimes used to denote a new development phase being released. For example, Minecraft Alpha ran from version 1.0.0 to 1.2.6, and when Beta was released, it reset the major version number and ran from 1.0 to 1.8. Once the game was fully released, the major version number again reset to 1.0.0.

#### Separating sequences

When printed, the sequences may be separated with characters. The choice of characters and their usage varies by the scheme.
The following list shows hypothetical examples of separation schemes for the same release (the thirteenth third-level revision to the fourth second-level revision to the second first-level revision):

- A scheme may use the same character between all sequences: 2.4.13, 2/4/13, 2-4-13
- A scheme's choice of which sequences to separate may be inconsistent, separating some sequences but not others: 2.413
- A scheme's choice of characters may be inconsistent within the same identifier: 2.4_13 (for instance, Minecraft Beta incremented from 1.7 to 1.7_01 to 1.7.2)

When a period is used to separate sequences, it may or may not represent a decimal point – see the "Incrementing sequences" section for various interpretation styles.

#### Number of sequences

There is sometimes a fourth, unpublished number which denotes the software build (as used by Microsoft). Adobe Flash is a notable case where a four-part version number is indicated publicly, as in 10.1.53.64. Some companies also include the build date. Version numbers may also include letters and other characters, such as Lotus 1-2-3 Release 1a.

#### Negative numbers

Some projects use negative version numbers.
One example is the SmartEiffel compiler, which started from −1.0 and counted upwards to 0.0.

### Date of release

Many projects use a date-based versioning scheme called Calendar Versioning (aka CalVer). Ubuntu is one example of a project using calendar versioning; Ubuntu 18.04, for example, was released in April 2018. This has the advantage of being easily relatable to development schedules and support timelines. Some video games also use dates as versions, for example the arcade game Street Fighter EX. At startup it displays the version number as a date plus a region code, for example 961219 ASIA. When using dates in versioning, for instance in file names, it is common to use the ISO 8601 scheme YYYY-MM-DD, as this is easily string-sorted in increasing or decreasing order. The hyphens are sometimes omitted. The Wine project formerly used a date versioning scheme, which used the year followed by the month followed by the day of the release; for example, "Wine 20040505". Minecraft had a similar version format, but instead used DDHHMM; for example, rd-132211, where 13 is the 13th of May and 2211 is 22:11.
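The point of the ISO 8601 convention is that plain string order coincides with chronological order; a short demonstration:

```python
# Lexicographic sort of ISO 8601 date strings equals chronological sort.
from datetime import date

stamps = ["2004-05-05", "1996-12-19", "2018-04-26", "2003-01-01"]

as_strings = sorted(stamps)                          # plain string sort
as_dates = sorted(stamps, key=date.fromisoformat)    # true chronological sort
print(as_strings == as_dates)                        # True

# Zero padding is what makes this work: "9" > "10" as strings, but "09" < "10".
```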
Microsoft Office build numbers are an encoded date: the first two digits indicate the number of months that have passed from the January of the year in which the project started (with each major Office release being a different project), while the last two digits indicate the day of that month. So 3419 is the 19th day of the 34th month after the month of January of the year the project started. Other examples that identify versions by year include Adobe Illustrator 88 and WordPerfect Office 2003. When a year is used to denote version, it is generally for marketing purposes, and an actual version number also exists. For example, Windows 95 is internally versioned as MS-DOS 7.00 and Windows 4.00; likewise, Windows 2000 is internally versioned as NT 5.0.
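The Office encoding described above is straightforward to invert; a sketch, assuming the first two digits count months with January of the project's start year as month 1 (that convention and the start year used are assumptions for illustration):

```python
# Decode an Office-style build number: two digits of month count, two of day.
def decode_build(build, start_year):
    months, day = divmod(int(build), 100)        # '3419' -> month 34, day 19
    year_offset, month_index = divmod(months - 1, 12)
    return start_year + year_offset, month_index + 1, day

# With a hypothetical project start in 2012, build 3419 lands in its 34th month.
print(decode_build("3419", start_year=2012))     # (2014, 10, 19)
```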
## Software examples

### Python

The Python Software Foundation has published PEP 440 – Version Identification and Dependency Specification, outlining their own flexible scheme that defines an epoch segment, a release segment, pre-release and post-release segments, and a development release segment.

### TeX

TeX has an idiosyncratic version numbering system, an unusual feature invented by its developer Donald Knuth. Since version 3.1, updates have been indicated by adding an extra digit at the end, so that the version number asymptotically approaches the number π, so 3.14 effectively means 3.2 in semantic versioning. (This is a form of unary numbering; the version number is the number of digits.) Since 2021, the version number has been 3.141592653 (3.9). This is a reflection of TeX being very stable, and only minor updates are anticipated. Knuth has stated that the "absolutely final change (to be made after [his] death)" will be to change the version number to π, at which point all remaining bugs will become permanent features. In a similar way, the version number of Metafont asymptotically approaches Euler's number, e. As of February 2021, the version number is 2.71828182 (2.8). Metafont was also devised by Donald Knuth as a companion to his TeX typesetting system.
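The TeX scheme can be generated mechanically, one more digit of π per release; a sketch:

```python
# Successive TeX version numbers: one more digit of pi per release.
PI_DIGITS = "3.14159265358979323846"   # enough digits for illustration

def tex_version(release):
    """release 1 -> '3.1', release 2 -> '3.14', ... (a form of unary numbering)."""
    return PI_DIGITS[: 2 + release]

for r in range(1, 10):
    print(tex_version(r))
# 3.1, 3.14, 3.141, ..., 3.141592653 (the version current since 2021)
```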
### Apple

During the era of the classic Mac OS, minor version numbers rarely went beyond ".1". When they did, they usually jumped straight to ".5", suggesting the release was "more significant". Thus, "8.5" was marketed as its own release, representing "Mac OS 8 and a half", and 8.6 effectively meant "8.5.1". Mac OS X departed from this trend, in large part because "X" (the Roman numeral for 10) was in the name of the product. As a result, all versions of OS X began with the number 10. The first major release of OS X was given the version number 10.0, but the next major release was not 11.0. Instead, it was numbered 10.1, followed by 10.2, 10.3, and so on for each subsequent major release. Thus the 11th major version of OS X was labeled "10.10". Even though the "X" was dropped from the name as of macOS 10.12, this numbering scheme continued through macOS 10.15.
Under the "X"-based versioning scheme, the third number (instead of the second) denoted a minor release, and additional updates below this level, as well as updates to a given major version of OS X coming after the release of a new major version, were titled Supplemental Updates. The Roman numeral X was concurrently leveraged for marketing purposes across multiple product lines. Both QuickTime and Final Cut Pro jumped from version 7 directly to version 10, QuickTime X and Final Cut Pro X. Like Mac OS X itself, the products were not upgrades to previous versions, but brand-new programs. As with OS X, major releases for these programs incremented the second digit and minor releases were denoted using a third digit. The "X" was dropped from Final Cut's name with the release of macOS 11.0 (see below), and QuickTime's branding became moot when the framework was deprecated in favor of AVFoundation in 2011 (the program for playing QuickTime video was only named QuickTime Player from the start).
Apple's next macOS release, provisionally numbered 10.16, was officially announced as macOS 11 at WWDC in June 2020 and released in November 2020. The following macOS version, macOS Monterey, was released in October 2021 and bumped its major version number to 12.

### Microsoft Windows

The Microsoft Windows operating system was first labelled with standard version numbers for Windows 1.0 through Windows 3.11. After this Microsoft excluded the version number from the product name. For Windows 95 (version 4.0), Windows 98 (4.10) and Windows 2000 (5.0), the year of release was included in the product title. After Windows 2000, Microsoft created the Windows Server family, which continued the year-based style with a difference: for minor releases, Microsoft suffixed "R2" to the title, e.g., Windows Server 2008 R2 (version 6.1). This style has remained consistent to date. The client versions of Windows, however, did not adopt a consistent style. First, they received names with arbitrary alphanumeric suffixes, as with Windows Me (4.90), Windows XP (5.1), and Windows Vista (6.0).
Then, once again, Microsoft adopted incremental numbers in the title, but this time they were not versioning numbers; the version numbers of Windows 7, Windows 8 and Windows 8.1 are respectively 6.1, 6.2 and 6.3. In Windows 10, the version number leaped to 10.0 and subsequent updates to the OS only incremented the build number and update build revision (UBR) number. The successor of Windows 10, Windows 11, was released on October 5, 2021. Despite being named "11", the new Windows release did not bump its major version number to 11; instead, it stayed at the same version number of 10.0 used by Windows 10.

### Other schemes

Some software producers use different schemes to denote releases of their software. The Debian project uses a major/minor versioning scheme for releases of its operating system but uses code names from the movie Toy Story during development to refer to stable, unstable, and testing releases. BLAG Linux and GNU features very large version numbers: major releases have numbers such as 50000 and 60000, while minor releases increase the number by 1 (e.g. 50001, 50002).
Alpha and beta releases are given decimal version numbers slightly less than the major release number, such as 19999.00071 for alpha 1 of version 20000, and 29999.50000 for beta 2 of version 30000. Starting at 9001 in 2003, the most recent version as of 2011 is 140000. Urbit uses Kelvin versioning (named after the absolute Kelvin temperature scale): software versions start at a high number and count down to version 0, at which point the software is considered finished and no further modifications are made.

## Internal version numbers

Software may have an "internal" version number which differs from the version number shown in the product name (and which typically follows version numbering rules more consistently).
Java SE 5.0, for example, has the internal version number of 1.5.0, and versions of Windows from NT 4 on have continued the standard numerical versions internally: Windows 2000 is NT 5.0, XP is Windows NT 5.1, Windows Server 2003 and Windows XP Professional x64 Edition are NT 5.2, Windows Server 2008 and Vista are NT 6.0, Windows Server 2008 R2 and Windows 7 are NT 6.1, Windows Server 2012 and Windows 8 are NT 6.2, and Windows Server 2012 R2 and Windows 8.1 are NT 6.3. Windows 10 was initially intended to be NT 6.4, as the earliest Technical Preview build shared to the public is numbered 6.4.9841.
However, that did not last, as the version of Windows 10 was quickly artificially increased to 10.0 to align with the commercial name, resulting in the first released version of the operating system being numbered 10.0.10240. Note, however, that Windows NT is only on its fifth major revision, as its first release was numbered 3.1 (to match the then-current Windows release number) and the Windows 10 launch made a version leap from 6.3 to 10.0.

## Pre-release versions

In conjunction with the various versioning schemes listed above, a system for denoting pre-release versions is generally used, as the program makes its way through the stages of the software release life cycle. Programs that are in an early stage are often called "alpha" software, after the first letter in the Greek alphabet. After they mature but are not yet ready for release, they may be called "beta" software, after the second letter in the Greek alphabet. Generally alpha software is tested by developers only, while beta software is distributed for community testing. Some systems use numerical versions less than 1 (such as 0.9) to suggest their approach toward a final "1.0" release. This is a common convention in open source software. However, if the pre-release version is for an existing software package (e.g. version 2.5), then an "a" or "alpha" may be appended to the version number.
So the alpha version of the 2.5 release might be identified as 2.5a or 2.5.a. An alternative is to refer to pre-release versions as "release candidates", so that software packages which are soon to be released as a particular version may carry that version tag followed by "rc-#", indicating the number of the release candidate; when the final version is released, the "rc" tag is removed.

## Release train

A software release train is a form of software release schedule in which a number of distinct series of versioned software releases for multiple products are released as a number of different "trains" on a regular schedule. Generally, for each product line, a number of different release trains are running at a given time, with each train moving from initial release to eventual maturity and retirement on a planned schedule. Users may experiment with a newer release train before adopting it for production, allowing them to experiment with newer, "raw" releases early, while continuing to follow the previous train's point releases for their production systems prior to moving to the new release train as it becomes mature. Cisco's IOS software platform used a release train schedule with many distinct trains for many years.
More recently, a number of other platforms, including Firefox and Fenix for Android, Eclipse, LibreOffice, Ubuntu, Fedora, Python, digiKam and VMware, have adopted the release train model.

## Modifications to the numeric system

### Odd-numbered versions for development releases

Between the 1.0 and the 2.6.x series, the Linux kernel used odd minor version numbers to denote development releases and even minor version numbers to denote stable releases. For example, Linux 2.3 was a development family of the second major design of the Linux kernel, and Linux 2.4 was the stable release family that Linux 2.3 matured into. After the minor version number in the Linux kernel comes the release number, in ascending order; for example, Linux 2.4.0 → Linux 2.4.22. Since the 2004 release of the 2.6 kernel, Linux no longer uses this system, and has a much shorter release cycle. The same odd-even system is used by some other software with long release cycles, such as Node.js up to version 0.12, as well as WineHQ.
### Dropping the most significant element

Sun's Java has at times had a hybrid system, where the internal version number has always been 1.x but has been marketed by reference only to the x:
- JDK 1.0.3
- JDK 1.1.2 through 1.1.8
- J2SE 1.2.0 ("Java 2") through 1.4.2
- Java 1.5.0, 1.6.0, 1.7.0, 1.8.0 ("Java 5, 6, 7, 8")

Sun also dropped the first digit for Solaris, where Solaris 2.8 (or 2.9) is referred to as Solaris 8 (or 9) in marketing materials. A similar jump took place with the Asterisk open-source PBX construction kit in the early 2010s, whose project leads announced that the then-current version 1.8.x would soon be followed by version 10. This approach, panned by many because it breaks the semantic significance of the sections of the version number, has been adopted by an increasing number of vendors, including Mozilla (for Firefox).
## Version number ordering systems

Version numbers very quickly evolve from simple integers (1, 2, ...) to rational numbers (2.08, 2.09, 2.10) and then to non-numeric "numbers" such as 4:3.4.3-2. These complex version numbers are therefore better treated as character strings. Operating systems that include package management facilities (such as all non-trivial Linux or BSD distributions) will use a distribution-specific algorithm for comparing version numbers of different software packages. For example, the ordering algorithms of Red Hat and derived distributions differ from those of the Debian-like distributions. As an example of surprising ordering behavior, in Debian, leading zeroes are ignored within chunks, so that 5.0005 and 5.5 are considered equal, and 5.5 < 5.0006. This can confuse users; string-matching tools may fail to find a given version number; and it can cause subtle bugs in package management if the programmers use string-indexed data structures such as version-number-indexed hash tables.
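To make the Debian example concrete, here is a minimal Python sketch of chunk-wise numeric comparison for plain dotted version numbers (this is illustrative only, not dpkg's actual algorithm, and it ignores epochs and revision suffixes such as 4:3.4.3-2):

```python
# Hypothetical helper, for illustration: split a dotted version string
# into integer chunks, so each chunk is compared numerically and
# leading zeroes within a chunk are ignored.
def version_key(version: str) -> tuple:
    return tuple(int(chunk) for chunk in version.split("."))

assert version_key("5.0005") == version_key("5.5")   # leading zeroes ignored
assert version_key("5.5") < version_key("5.0006")
assert version_key("2.9") < version_key("2.10")      # numeric, not lexicographic
```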
To ease sorting, some software packages represent each component of the major.minor.release scheme with a fixed width. Perl represents its version numbers as a floating-point number; for example, Perl's 5.8.7 release can also be represented as 5.008007. This allows a theoretical version of 5.8.10 to be represented as 5.008010. Other software packages pack each segment into a fixed bit width; for example, on Microsoft Windows, version number 6.3.9600.16384 would be represented as hexadecimal 0x0006000325804000. The floating-point scheme breaks down if any segment of the version number exceeds 999; a packed-binary scheme employing 16 bits per segment breaks down after 65535.
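As a sketch of the packed-binary scheme, the following snippet packs four 16-bit segments into a 64-bit integer and reproduces the Windows example above (the function name is ours, for illustration):

```python
def pack_version(major: int, minor: int, build: int, revision: int) -> int:
    """Pack major.minor.build.revision into one 64-bit integer,
    16 bits per segment; the scheme breaks down above 65535."""
    for part in (major, minor, build, revision):
        assert 0 <= part <= 0xFFFF
    return (major << 48) | (minor << 32) | (build << 16) | revision

assert pack_version(6, 3, 9600, 16384) == 0x0006000325804000
```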
## Political and cultural significance of version numbers

### Version 1.0 as a milestone

The free-software and open-source communities tend to release software early and often. Initial versions are numbers less than 1, with these 0.x versions used to convey that the software is incomplete and not reliable enough for general release or usable in its current state. Backward-incompatible changes are common with 0.x versions. Version 1.0 is used as a major milestone, indicating that the software has at least all the major features and functions the developers wanted to get into that version, and is considered reliable enough for general release. A good example of this is the Linux kernel, which was first released as version 0.01 in 1991 and took until 1994 to reach version 1.0.0. The developers of the arcade game emulator MAME do not intend ever to release a version 1.0 of the program, because there will always be more arcade games to emulate and thus the project can never be truly completed. Accordingly, version 0.99 was followed by version 0.100. Since the internet has become widespread, most commercial software vendors no longer follow the maxim that a major version should be "complete", and instead rely on patches with bugfixes to address known issues for which a solution has been found.

### Version numbers as marketing

A relatively common practice is to make major jumps in version numbers for marketing reasons. Sometimes software vendors bypass the 1.0 release, or quickly follow it with a release bearing a subsequent version number, because 1.0 software is considered by many customers too immature to trust with production deployments. For example, as in the case of dBase II, a product is launched with a version number that implies it is more mature than it is. Other times version numbers are increased to match those of competitors.
This can be seen in many examples of product version numbering by Microsoft, America Online, Sun Solaris, the Java Virtual Machine, SCO Unix, and WordPerfect. Microsoft Access jumped from version 2.0 to version 7.0 to match the version number of Microsoft Word. Microsoft has also been the target of "catch-up" versioning: the Netscape browsers skipped version 5, jumping to 6, in line with Microsoft's Internet Explorer, but also because the Mozilla application suite inherited version 5 in its user agent string during pre-1.0 development, and Netscape 6.x was built upon Mozilla's code base. Another example of keeping up with competitors is when Slackware Linux jumped from version 4 to version 7 in 1999.
### Superstition

- The Office 2007 release of Microsoft Office had an internal version number of 12. The next version, Office 2010, has an internal version of 14, due to superstitions surrounding the number 13. Visual Studio 2013 is version 12.0 of the product, and the next version, Visual Studio 2015, has version number 14.0 for the same reason.
- Roxio Toast went from version 12 to version 14, likely in an effort to skip the superstitions surrounding the number 13.
- Corel's WordPerfect Office, version 13, is marketed as "X3" (Roman numeral 10 and "3"). The practice has continued into the next version, X4. The same has happened with Corel's Graphic Suite (i.e. CorelDRAW, Corel Photo-Paint) as well as its video editing software "Video Studio".
- Sybase skipped major versions 13 and 14 in its Adaptive Server Enterprise relational database product, moving from 12.5 to 15.0.
- ABBYY Lingvo Dictionary uses the numbering 12, x3 (14), x5 (15).
- SUSE Linux Enterprise skipped versions 13 and 14 after version 12, directly releasing SLES 15 in July 2018.

### Geek culture

- The SUSE Linux distribution started at version 4.2, to reference 42, "the answer to the ultimate question of life, the universe and everything" mentioned in Douglas Adams' The Hitchhiker's Guide to the Galaxy.
- A Slackware Linux distribution was versioned 13.37, referencing leet.
- Finnix skipped from version 93.0 to 100, partly to fulfill the assertion, "There Will Be No Finnix '95", a reference to Windows 95.
- The Tagged Image File Format specification has used 42 as its internal version number since its inception, its designers not expecting to alter it during their (or its) lifetime, since changing it would conflict with its development directives.
- Finnix skipped from version 93.0 to 100, partly to fulfill the assertion, "There Will Be No Finnix '95", a reference to Windows 95. - The Tagged Image File Format specification has used 42 as internal version number since its inception, its designers not expecting to alter it anymore during their (or its) lifetime since it would conflict with its development directives. ## Overcoming perceived marketing difficulties In the mid-1990s, the rapidly growing CMMS, Maximo, moved from Maximo Series 3 directly to Series 5, skipping Series 4 due to that number's perceived marketing difficulties in the Chinese market, where the number 4 is associated with "death" (see tetraphobia). This did not stop Maximo Series 5 version 4.0 from being released. (The "Series" versioning has since been dropped, effectively resetting version numbers after Series 5 version 1.0's release.) ## Significance ### In software engineering Version numbers are used in practical terms by the consumer, or client, to identify or compare their copy of the software product against another copy, such as the newest version released by the developer. For the programmer or company, versioning is often used on a revision-by-revision basis, where individual parts of the software are compared and contrasted with newer or older revisions of those same parts, often in a collaborative version control system.
In the 21st century, more programmers started to use a formalized version policy, such as the semantic versioning policy. The purpose of such policies is to make it easier for other programmers to know when code changes are likely to break things they have written. Such policies are especially important for software libraries and frameworks, but may also be very useful for command-line applications (which may be called from other applications) and for other applications (which may be scripted and/or extended by third parties). Versioning is also a required practice to enable many schemes of patching and upgrading software, especially for automatically deciding what to upgrade to, and where.
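As a minimal sketch of the semantic versioning convention just mentioned (MAJOR.MINOR.PATCH, where a MAJOR bump signals intentionally breaking changes), a dependency tool might flag upgrades like this; the function is hypothetical, for illustration:

```python
def is_breaking_upgrade(old: str, new: str) -> bool:
    old_major = int(old.split(".")[0])
    new_major = int(new.split(".")[0])
    # Under semantic versioning, a MAJOR increment may break callers;
    # pre-1.0 (0.x) versions make no compatibility promise at all.
    return new_major > old_major or old_major == 0

assert is_breaking_upgrade("1.4.2", "2.0.0")
assert not is_breaking_upgrade("1.4.2", "1.5.0")
assert is_breaking_upgrade("0.9.0", "0.10.0")
```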
### In technical support

Version numbers allow people providing support to ascertain exactly which code a user is running, so that they can rule out bugs that have already been fixed as a cause of an issue, and the like. This is especially important when a program has a substantial user community, particularly when that community is large enough that the people providing technical support are not the people who wrote the code. The semantic meaning of version.revision.change-style numbering is also important to information technology staff, who often use it to determine how much attention and research they need to give a new release before deploying it in their facility. As a rule of thumb, the bigger the changes, the larger the chance that something might break (although examining the changelog, if any, may reveal only superficial or irrelevant changes). This is one reason for some of the distaste expressed at the "drop the major release" approach taken by Asterisk et al.: now, staff must (or at least should) do a full regression test for every update.

## Non-software use

Some computer file systems, such as the OpenVMS Filesystem, also keep versions for files. Versioning of documents is broadly similar to the practice in software engineering: with each small change in the structure, contents, or conditions, the version number is incremented by 1, or by a smaller or larger value, depending on the personal preference of the author and the size or importance of the changes made. Software-style version numbers can be found in other media.
In some cases, the use is a direct analogy (for example: Jackass 2.5, a version of Jackass Number Two with additional special features; the second album by Garbage, titled Version 2.0; or Dungeons & Dragons 3.5, where the rules were revised from the third edition, but not so much as to be considered the fourth). More often it is used to play on an association with high technology and does not literally indicate a "version" (e.g., Tron 2.0, a video game follow-up to the film Tron, or the television series The IT Crowd, which refers to the second season as Version 2.0). A particularly notable usage is Web 2.0, referring to websites from the early 2000s that emphasized user-generated content, usability, and interoperability. Technical drawing and CAD software files may also use some kind of primitive versioning number to keep track of changes.
https://en.wikipedia.org/wiki/Software_versioning
In mathematics education, precalculus is a course, or a set of courses, that includes algebra and trigonometry at a level designed to prepare students for the study of calculus, hence the name precalculus. Schools often distinguish between algebra and trigonometry as two separate parts of the coursework.

## Concept

For students to succeed at finding derivatives and antiderivatives in calculus, they need facility with algebraic expressions, particularly in the modification and transformation of such expressions. Leonhard Euler wrote the first precalculus book in 1748, Introductio in analysin infinitorum (Latin: Introduction to the Analysis of the Infinite), which "was meant as a survey of concepts and methods in analysis and analytic geometry preliminary to the study of differential and integral calculus." He began with the fundamental concepts of variables and functions. His innovation is noted for its use of exponentiation to introduce the transcendental functions. The general logarithm, to an arbitrary positive base, Euler presents as the inverse of an exponential function. Then the natural logarithm is obtained by taking as base "the number for which the hyperbolic logarithm is one", sometimes called Euler's number, and written $$ e $$ . This appropriation of the significant number from Grégoire de Saint-Vincent's calculus suffices to establish the natural logarithm.
This part of precalculus prepares the student for integration of the monomial $$ x^p $$ in the instance of $$ p = -1 $$ . Today's precalculus text computes $$ e $$ as the limit $$ e = \lim_{n \rightarrow \infty} \left(1 + \frac{1}{n}\right)^{n} $$ . An exposition on compound interest in financial mathematics may motivate this limit. Another difference in the modern text is the avoidance of complex numbers, except as they may arise as roots of a quadratic equation with a negative discriminant, or in Euler's formula as an application of trigonometry. Euler used not only complex numbers but also infinite series in his precalculus. Today's course may cover arithmetic and geometric sequences and series, but not the application by Saint-Vincent to gain his hyperbolic logarithm, which Euler used to finesse his precalculus.
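A quick numerical check (illustrative only) shows the compound-interest reading of the limit above: compounding n times per year at a 100% annual rate approaches $$ e $$ as n grows.

```python
import math

# (1 + 1/n)^n approaches e as the compounding frequency n increases.
for n in (1, 12, 365, 1_000_000):
    print(n, (1 + 1 / n) ** n)   # 2.0, 2.613..., 2.7145..., 2.71828...

print(math.e)                    # 2.718281828459045
```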
## Variable content

Precalculus prepares students for calculus somewhat differently from how pre-algebra prepares students for algebra. While pre-algebra often has extensive coverage of basic algebraic concepts, precalculus courses might see only small amounts of calculus concepts, if any, and usually involve covering algebraic topics that might not have been given attention in earlier algebra courses. Some precalculus courses might differ from others in terms of content. For example, an honors-level course might spend more time on conic sections, Euclidean vectors, and other topics needed for calculus, used in fields such as medicine or engineering. A college-preparatory/regular class might focus on topics used in business-related careers, such as matrices or power functions. A standard course considers functions, function composition, and inverse functions, often in connection with sets and real numbers. In particular, polynomials and rational functions are developed. Algebraic skills are exercised with trigonometric functions and trigonometric identities.
The binomial theorem, polar coordinates, parametric equations, and the limits of sequences and series are other common topics of precalculus. Sometimes the mathematical induction method of proof for propositions dependent upon a natural number may be demonstrated, but generally coursework involves exercises rather than theory.

## Sample texts

- Roland E. Larson & Robert P. Hostetler (1989) Precalculus, second edition, D.C. Heath and Company
- Margaret L. Lial & Charles D. Miller (1988) Precalculus, Scott Foresman
- Jerome E. Kaufmann (1988) Precalculus, PWS-Kent Publishing Company (Wadsworth)
- Karl J. Smith (1990) Precalculus Mathematics: a functional approach, fourth edition, Brooks/Cole
- Michael Sullivan (1993) Precalculus, third edition, Dellen imprint of Macmillan Publishers

### Online access

- Jay Abramson and others (2014) Precalculus from OpenStax
- David Lippman & Melonie Rasmussen (2017) Precalculus: an investigation of functions
- Carl Stitz & Jeff Zeager (2013) Precalculus (pdf)
https://en.wikipedia.org/wiki/Precalculus
A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process. When referring specifically to probabilities, the corresponding term is probabilistic model. All statistical hypothesis tests and all statistical estimators are derived via statistical models. More generally, statistical models are part of the foundation of statistical inference. A statistical model is usually specified as a mathematical relationship between one or more random variables and other non-random variables. As such, a statistical model is "a formal representation of a theory" (Herman Adèr quoting Kenneth Bollen).

## Introduction

Informally, a statistical model can be thought of as a statistical assumption (or set of statistical assumptions) with a certain property: that the assumption allows us to calculate the probability of any event. As an example, consider a pair of ordinary six-sided dice. We will study two different statistical assumptions about the dice. The first statistical assumption is this: for each of the dice, the probability of each face (1, 2, 3, 4, 5, and 6) coming up is 1/6.
From that assumption, we can calculate the probability of both dice coming up 5: 1/6 × 1/6 = 1/36. More generally, we can calculate the probability of any event: e.g. (1 and 2) or (3 and 3) or (5 and 6). The alternative statistical assumption is this: for each of the dice, the probability of the face 5 coming up is 1/8 (because the dice are weighted). From that assumption, we can calculate the probability of both dice coming up 5: 1/8 × 1/8 = 1/64. We cannot, however, calculate the probability of any other nontrivial event, as the probabilities of the other faces are unknown. The first statistical assumption constitutes a statistical model: with the assumption alone, we can calculate the probability of any event. The alternative statistical assumption does not constitute a statistical model: with the assumption alone, we cannot calculate the probability of every event. In the example above, with the first assumption, calculating the probability of an event is easy. With some other examples, though, the calculation can be difficult, or even impractical (e.g. it might require millions of years of computation).
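For the simple dice model, the calculation is easy enough to do by enumeration. The following sketch (illustrative only) renders the first assumption computationally: the sample space has 36 equally likely outcomes, so the probability of any event is computable.

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # sample space S, 36 outcomes

def prob(event) -> Fraction:
    """Probability of an event, given as a predicate on (die1, die2)."""
    return Fraction(sum(event(o) for o in outcomes), len(outcomes))

print(prob(lambda o: o == (5, 5)))        # 1/36
print(prob(lambda o: o[0] + o[1] == 7))   # 1/6
```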
For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible.

## Formal definition

In mathematical terms, a statistical model is a pair ( $$ S, \mathcal{P} $$ ), where $$ S $$ is the set of possible observations, i.e. the sample space, and $$ \mathcal{P} $$ is a set of probability distributions on $$ S $$ . The set $$ \mathcal{P} $$ represents all of the models that are considered possible. This set is typically parameterized: $$ \mathcal{P}=\{F_{\theta} : \theta \in \Theta\} $$ . The set $$ \Theta $$ defines the parameters of the model.
If a parameterization is such that distinct parameter values give rise to distinct distributions, i.e. $$ F_{\theta_1} = F_{\theta_2} \Rightarrow \theta_1 = \theta_2 $$ (in other words, the mapping is injective), it is said to be identifiable. In some cases, the model can be more complex.
- In Bayesian statistics, the model is extended by adding a probability distribution over the parameter space $$ \Theta $$ .
- A statistical model can sometimes distinguish two sets of probability distributions. The first set $$ \mathcal{Q}=\{F_{\theta} : \theta \in \Theta\} $$ is the set of models considered for inference. The second set $$ \mathcal{P}=\{F_{\lambda} : \lambda \in \Lambda\} $$ is the set of models that could have generated the data, which is much larger than $$ \mathcal{Q} $$ .
Such statistical models are key in checking that a given procedure is robust, i.e. that it does not produce catastrophic errors when its assumptions about the data are incorrect.

## An example

Suppose that we have a population of children, with the ages of the children distributed uniformly in the population. The height of a child will be stochastically related to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall. We could formalize that relationship in a linear regression model, like this: heighti = b0 + b1agei + εi, where b0 is the intercept, b1 is a parameter that age is multiplied by to obtain a prediction of height, εi is the error term, and i identifies the child. This implies that height is predicted by age, with some error. An admissible model must be consistent with all the data points.
Thus, a straight line (heighti = b0 + b1agei) cannot be admissible as a model of the data, unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. The error term, εi, must be included in the equation, so that the model is consistent with all the data points. To do statistical inference, we would first need to assume some probability distributions for the εi. For instance, we might assume that the εi distributions are i.i.d. Gaussian, with zero mean. In this instance, the model would have 3 parameters: b0, b1, and the variance of the Gaussian distribution. We can formally specify the model in the form ( $$ S, \mathcal{P} $$ ) as follows. The sample space, $$ S $$ , of our model comprises the set of all possible pairs (age, height). Each possible value of $$ \theta $$  = (b0, b1, σ2) determines a distribution on $$ S $$ ; denote that distribution by $$ F_{\theta} $$ .
If $$ \Theta $$ is the set of all possible values of $$ \theta $$ , then $$ \mathcal{P}=\{F_{\theta} : \theta \in \Theta\} $$ . (The parameterization is identifiable, and this is easy to check.) In this example, the model is determined by (1) specifying $$ S $$ and (2) making some assumptions relevant to $$ \mathcal{P} $$ . There are two assumptions: that height can be approximated by a linear function of age, and that errors in the approximation are distributed as i.i.d. Gaussian. The assumptions are sufficient to specify $$ \mathcal{P} $$ , as they are required to do.
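The example can also be sketched computationally. The parameter values below are hypothetical, chosen only for illustration: we draw from one distribution $$ F_{\theta} $$ in the family and recover b0 and b1 by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
b0, b1, sigma = 0.75, 0.10, 0.05            # assumed theta = (b0, b1, sigma^2)

age = rng.uniform(2, 15, size=200)          # ages roughly uniform, as assumed
height = b0 + b1 * age + rng.normal(0, sigma, size=200)   # i.i.d. Gaussian errors

# Ordinary least squares recovers estimates of b0 and b1.
X = np.column_stack([np.ones_like(age), age])
(b0_hat, b1_hat), *_ = np.linalg.lstsq(X, height, rcond=None)
print(b0_hat, b1_hat)                       # close to 0.75 and 0.10
```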
## General remarks

A statistical model is a special class of mathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. some of the variables are stochastic. In the above example with children's heights, ε is a stochastic variable; without that stochastic variable, the model would be deterministic. Statistical models are often used even when the data-generating process being modeled is deterministic. For instance, coin tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process). Choosing an appropriate statistical model to represent a given data-generating process is sometimes extremely difficult, and may require knowledge of both the process and relevant statistical analyses. Relatedly, the statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". There are three purposes for a statistical model, according to Konishi & Kitagawa:
1. Predictions
1. Extraction of information
1. Description of stochastic structures
Those three purposes are essentially the same as the three purposes indicated by Friendly & Meyer: prediction, estimation, description.

## Dimension of a model

Suppose that we have a statistical model ( $$ S, \mathcal{P} $$ ) with $$ \mathcal{P}=\{F_{\theta} : \theta \in \Theta\} $$ . In notation, we write $$ \Theta \subseteq \mathbb{R}^k $$ , where $$ k $$ is a positive integer ( $$ \mathbb{R} $$ denotes the real numbers; other sets can be used, in principle). Here, $$ k $$ is called the dimension of the model. The model is said to be parametric if $$ \Theta $$ has finite dimension.
As an example, if we assume that data arise from a univariate Gaussian distribution, then we are assuming that $$ \mathcal{P}=\left\{F_{\mu,\sigma }(x) \equiv \frac{1}{\sqrt{2 \pi} \sigma} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2}\right) : \mu \in \mathbb{R}, \sigma > 0 \right\} $$ . In this example, the dimension, $$ k $$ , equals 2. As another example, suppose that the data consist of points (x, y) that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean): this leads to the same statistical model as was used in the example with children's heights. The dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note that the set of all possible lines has dimension 2, even though geometrically a line has dimension 1.)
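As an illustrative sketch, the two-dimensional parameter θ = (μ, σ) of the Gaussian family above can be estimated from a sample by maximum likelihood; the sample here is simulated, for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=10_000)   # hypothetical sample

mu_hat = x.mean()            # MLE of mu
sigma_hat = x.std()          # MLE of sigma (uses the 1/n variance estimator)
print(mu_hat, sigma_hat)     # approximately 3.0 and 2.0
```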
Although formally $$ \theta \in \Theta $$ is a single parameter that has dimension $$ k $$ , it is sometimes regarded as comprising $$ k $$ separate parameters. For example, with the univariate Gaussian distribution, $$ \theta $$ is formally a single parameter with dimension 2, but it is often regarded as comprising 2 separate parameters: the mean and the standard deviation. A statistical model is nonparametric if the parameter set $$ \Theta $$ is infinite-dimensional. A statistical model is semiparametric if it has both finite-dimensional and infinite-dimensional parameters. Formally, if $$ k $$ is the dimension of $$ \Theta $$ and $$ n $$ is the number of samples, both semiparametric and nonparametric models have $$ k \rightarrow \infty $$ as $$ n \rightarrow \infty $$ . If $$ k/n \rightarrow 0 $$ as $$ n \rightarrow \infty $$ , then the model is semiparametric; otherwise, the model is nonparametric. Parametric models are by far the most commonly used statistical models.
Regarding semiparametric and nonparametric models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".

## Nested models

Two statistical models are nested if the first model can be transformed into the second model by imposing constraints on the parameters of the first model. As an example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. As a second example, the quadratic model y = b0 + b1x + b2x2 + ε has, nested within it, the linear model y = b0 + b1x + ε: we constrain the parameter b2 to equal 0. In both those examples, the first model has a higher dimension than the second model (for the first example, the zero-mean model has dimension 1). Such is often, but not always, the case.
As an example where they have the same dimension, the set of positive-mean Gaussian distributions is nested within the set of all Gaussian distributions; they both have dimension 2.

## Comparing models

Comparing statistical models is fundamental for much of statistical inference. Konishi & Kitagawa state: "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models." Common criteria for comparing models include the following: R2, Bayes factor, Akaike information criterion, and the likelihood-ratio test together with its generalization, the relative likelihood. Another way of comparing two statistical models is through the notion of deficiency introduced by Lucien Le Cam.
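As a hedged sketch of one listed criterion, the Akaike information criterion (AIC) can be used to compare the nested linear and quadratic models from the previous section on simulated data; the data and the constant-dropping in the AIC formula below are illustrative simplifications, not a prescribed procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)   # data generated by the linear model

def aic(deg: int) -> float:
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    n, k = len(y), deg + 2                  # coefficients plus error variance
    # Gaussian log-likelihood, up to an additive constant shared by both models.
    return n * np.log(resid.var()) + 2 * k

print(aic(1), aic(2))   # the linear model should usually score lower (better)
```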
https://en.wikipedia.org/wiki/Statistical_model
In computer science, tree traversal (also known as tree search and walking the tree) is a form of graph traversal and refers to the process of visiting (e.g. retrieving, updating, or deleting) each node in a tree data structure, exactly once. Such traversals are classified by the order in which the nodes are visited. The following algorithms are described for a binary tree, but they may be generalized to other trees as well.

## Types

Unlike linked lists, one-dimensional arrays and other linear data structures, which are canonically traversed in linear order, trees may be traversed in multiple ways. They may be traversed in depth-first or breadth-first order. There are three common ways to traverse them in depth-first order: in-order, pre-order and post-order. Beyond these basic traversals, various more complex or hybrid schemes are possible, such as depth-limited searches like iterative deepening depth-first search. The latter, as well as breadth-first search, can also be used to traverse infinite trees, see below.

### Data structures for tree traversal

Traversing a tree involves iterating over all nodes in some manner.
Because from a given node there is more than one possible next node (a tree is not a linear data structure), then, assuming sequential computation (not parallel), some nodes must be deferred, i.e. stored in some way for later visiting. This is often done via a stack (LIFO) or queue (FIFO). As a tree is a self-referential (recursively defined) data structure, traversal can be defined by recursion or, more subtly, corecursion, in a natural and clear fashion; in these cases the deferred nodes are stored implicitly in the call stack. Depth-first search is easily implemented via a stack, including recursively (via the call stack), while breadth-first search is easily implemented via a queue, including corecursively.

### Depth-first search

In depth-first search (DFS), the search tree is deepened as much as possible before going to the next sibling.
To traverse binary trees with depth-first search, perform the following operations at each node:
1. If the current node is empty then return.
1. Execute the following three operations in a certain order:
   - N: Visit the current node.
   - L: Recursively traverse the current node's left subtree.
   - R: Recursively traverse the current node's right subtree.

The trace of a traversal is called a sequentialisation of the tree. The traversal trace is a list of each visited node. No one sequentialisation according to pre-, in- or post-order describes the underlying tree uniquely. Given a tree with distinct elements, either pre-order or post-order paired with in-order is sufficient to describe the tree uniquely (see the sketch below). However, pre-order with post-order leaves some ambiguity in the tree structure. There are three positions within the traversal, relative to the node (in the figure: red, green, or blue), at which the visit of the node can take place. The choice of exactly one position determines exactly one visit of each node, as described below. Visiting at all three positions yields a threefold visit of the same node, producing the "all-order" sequentialisation.
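The uniqueness claim can be illustrated with a short sketch (nodes represented here as nested (left, key, right) tuples): with distinct keys, the pair of pre-order and in-order sequentialisations determines the tree, so it can be rebuilt.

```python
def build(preorder, inorder):
    """Rebuild a binary tree from its pre-order and in-order traces."""
    if not preorder:
        return None
    root = preorder[0]                 # pre-order lists the root first
    i = inorder.index(root)            # in-order splits left/right subtrees
    return (build(preorder[1:i + 1], inorder[:i]),
            root,
            build(preorder[i + 1:], inorder[i + 1:]))

print(build(list("FBADCEGIH"), list("ABCDEFGHI")))
```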
#### Pre-order, NLR

1. Visit the current node (in the figure: position red).
1. Recursively traverse the current node's left subtree.
1. Recursively traverse the current node's right subtree.

The pre-order traversal is a topologically sorted one, because a parent node is processed before any of its child nodes.

#### Post-order, LRN

1. Recursively traverse the current node's left subtree.
1. Recursively traverse the current node's right subtree.
1. Visit the current node (in the figure: position blue).

Post-order traversal can be useful to get the postfix expression of a binary expression tree.

#### In-order, LNR

1. Recursively traverse the current node's left subtree.
1. Visit the current node (in the figure: position green).
1. Recursively traverse the current node's right subtree.
In a binary search tree ordered such that in each node the key is greater than all keys in its left subtree and less than all keys in its right subtree, in-order traversal retrieves the keys in ascending sorted order.

#### Reverse pre-order, NRL

1. Visit the current node.
1. Recursively traverse the current node's right subtree.
1. Recursively traverse the current node's left subtree.

#### Reverse post-order, RLN

1. Recursively traverse the current node's right subtree.
1. Recursively traverse the current node's left subtree.
1. Visit the current node.

#### Reverse in-order, RNL

1. Recursively traverse the current node's right subtree.
1. Visit the current node.
1. Recursively traverse the current node's left subtree.
In a binary search tree ordered such that in each node the key is greater than all keys in its left subtree and less than all keys in its right subtree, reverse in-order traversal retrieves the keys in descending sorted order.

#### Arbitrary trees

To traverse arbitrary trees (not necessarily binary trees) with depth-first search, perform the following operations at each node:
1. If the current node is empty then return.
1. Visit the current node for pre-order traversal.
1. For each i from 1 to the current node's number of subtrees − 1, or from the latter to the former for reverse traversal, do:
   1. Recursively traverse the current node's i-th subtree.
   1. Visit the current node for in-order traversal.
1. Recursively traverse the current node's last subtree.
1. Visit the current node for post-order traversal.
Depending on the problem at hand, the pre-order, post-order, and especially one of the (number of subtrees − 1) in-order operations may be optional. Also, in practice more than one of the pre-order, post-order, and in-order operations may be required. For example, when inserting into a ternary tree, a pre-order operation is performed by comparing items. A post-order operation may be needed afterwards to re-balance the tree.

### Breadth-first search

In breadth-first search (BFS) or level-order search, the search tree is broadened as much as possible before going to the next depth.

### Other types

There are also tree traversal algorithms that classify as neither depth-first search nor breadth-first search. One such algorithm is Monte Carlo tree search, which concentrates on analyzing the most promising moves, basing the expansion of the search tree on random sampling of the search space.

## Applications

Pre-order traversal can be used to make a prefix expression (Polish notation) from an expression tree: traverse the expression tree pre-orderly.
For example, traversing the depicted arithmetic expression in pre-order yields "+ * A − B C + D E". In prefix notation, there is no need for any parentheses as long as each operator has a fixed number of operands. Pre-order traversal is also used to create a copy of the tree. Post-order traversal can generate a postfix representation (Reverse Polish notation) of a binary tree. Traversing the depicted arithmetic expression in post-order yields "A B C − * D E + +"; the latter can easily be transformed into machine code to evaluate the expression by a stack machine. Post-order traversal is also used to delete the tree: each node is freed after freeing its children. In-order traversal is very commonly used on binary search trees because it returns values from the underlying set in order, according to the comparator that set up the binary search tree.
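The quoted traversal outputs correspond to the expression (A * (B − C)) + (D + E); the following sketch reproduces them, with nodes again as (left, value, right) tuples and an ASCII '-' standing in for the minus sign:

```python
# Pre-order and post-order traversals of an expression tree.
def pre(t):  return [] if t is None else [t[1]] + pre(t[0]) + pre(t[2])
def post(t): return [] if t is None else post(t[0]) + post(t[2]) + [t[1]]

leaf = lambda v: (None, v, None)
tree = ((leaf("A"), "*", (leaf("B"), "-", leaf("C"))),
        "+",
        (leaf("D"), "+", leaf("E")))

print(" ".join(pre(tree)))   # + * A - B C + D E
print(" ".join(post(tree)))  # A B C - * D E + +
```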
## Implementations

### Depth-first search implementation

Below are examples of implementations for pre-order, post-order and in-order traversal, in a recursive approach as well as a stack-based iterative approach. Implementations in the iterative approach are able to avoid the drawbacks of recursion, particularly limitations of stack space and performance issues. Several alternative implementations are also mentioned.

#### Pre-order implementation

procedure preorder(node)
    if node = null
        return
    visit(node)
    preorder(node.left)
    preorder(node.right)

procedure iterativePreorder(node)
    if node = null
        return
    stack ← empty stack
    stack.push(node)
    while not stack.isEmpty()
        node ← stack.pop()
        visit(node)
        // right child is pushed first so that left is processed first
        if node.right ≠ null
            stack.push(node.right)
        if node.left ≠ null
            stack.push(node.left)
#### Post-order implementation

procedure postorder(node)
    if node = null
        return
    postorder(node.left)
    postorder(node.right)
    visit(node)

procedure iterativePostorder(node)
    if node = null
        return
    stack ← empty stack
    lastNodeVisited ← null
    while not stack.isEmpty() or node ≠ null
        if node ≠ null
            stack.push(node)
            node ← node.left
        else
            peekNode ← stack.peek()
            // if right child exists and traversing node
            // from left child, then move right
            if peekNode.right ≠ null and lastNodeVisited ≠ peekNode.right
                node ← peekNode.right
            else
                visit(peekNode)
                lastNodeVisited ← stack.pop()
#### In-order implementation

procedure inorder(node)
    if node = null
        return
    inorder(node.left)
    visit(node)
    inorder(node.right)

procedure iterativeInorder(node)
    if node = null
        return
    stack ← empty stack
    while not stack.isEmpty() or node ≠ null
        if node ≠ null
            stack.push(node)
            node ← node.left
        else
            node ← stack.pop()
            visit(node)
            node ← node.right
#### Another variant of pre-order

If the tree is represented by an array (first index is 0), it is possible to calculate the index of the next element:

procedure bubbleUp(array, i, leaf)
    k ← 1
    i ← (i - 1)/2
    while (leaf + 1) % (k * 2) ≠ k
        i ← (i - 1)/2
        k ← 2 * k
    return i

procedure preorder(array)
    i ← 0
    while i ≠ array.size
        visit(array[i])
        if i = array.size - 1
            i ← array.size
        else if i < array.size/2
            i ← i * 2 + 1
        else
            leaf ← i - array.size/2
            parent ← bubbleUp(array, i, leaf)
            i ← parent * 2 + 2
#### Advancing to the next or previous node

The `node` to be started with may have been found in the binary search tree `bst` by means of a standard search function, which is shown here in an implementation without parent pointers, i.e. it uses a `stack` for holding the ancestor pointers.
procedure search(bst, key)
    // returns a (node, stack)
    node ← bst.root
    stack ← empty stack
    while node ≠ null
        stack.push(node)
        if key = node.key
            return (node, stack)
        if key < node.key
            node ← node.left
        else
            node ← node.right
    return (null, empty stack)

The function inorderNext returns an in-order neighbor of `node`, either the in-order successor (for `dir=1`) or the in-order predecessor (for `dir=0`), together with the updated `stack`, so that the binary search tree may be sequentially in-order-traversed and searched in the given direction `dir` further on.
procedure inorderNext(node, dir, stack)
    newnode ← node.child[dir]
    if newnode ≠ null
        do
            node ← newnode
            stack.push(node)
            newnode ← node.child[1-dir]
        until newnode = null
        return (node, stack)
    // node does not have a dir-child:
    do
        if stack.isEmpty()
            return (null, empty stack)
        oldnode ← node
        node ← stack.pop() // parent of oldnode
    until oldnode ≠ node.child[dir]
    // now oldnode = node.child[1-dir],
    // i.e. node = ancestor (and predecessor/successor) of original node
    return (node, stack)

Note that the function does not use keys, which means that the sequential structure is completely recorded by the binary search tree's edges.
For traversals without change of direction, the (amortised) average complexity is $$ \mathcal{O}(1) , $$ because a full traversal takes $$ 2 n-2 $$ steps for a BST of size $$ n : $$ 1 step for each edge down and 1 for each edge up.
The worst-case complexity is $$ \mathcal{O}(h) $$ with $$ h $$ as the height of the tree.
For traversals without change of direction, the (amortised) average complexity is $$ \mathcal{O}(1) , $$ because a full traversal takes $$ 2n-2 $$ steps for a BST of size $$ n : $$ each of the $$ n-1 $$ edges is traversed exactly twice, once downward and once upward. The worst-case complexity is $$ \mathcal{O}(h) , $$ where $$ h $$ is the height of the tree.

All the above implementations require stack space proportional to the height of the tree, which is a call stack for the recursive implementations and a parent (ancestor) stack for the iterative ones. In a poorly balanced tree, this can be considerable. With the iterative implementations we can remove the stack requirement by maintaining parent pointers in each node, or by threading the tree (next section).

#### Morris in-order traversal using threading

A binary tree is threaded by making every left child pointer (that would otherwise be null) point to the in-order predecessor of the node (if it exists) and every right child pointer (that would otherwise be null) point to the in-order successor of the node (if it exists).

Advantages:
1. Avoids recursion, which uses a call stack and consumes memory and time.
1. The node keeps a record of its parent.

Disadvantages:
1. The tree is more complex.
1. We can make only one traversal at a time.
1. It is more prone to errors when both children are absent and both pointers of nodes point to their ancestors.

Morris traversal is an implementation of in-order traversal that uses threading, in three steps (sketched in code below):
1. Create links to the in-order successor.
1. Print the data using these links.
1. Revert the changes to restore the original tree.
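A minimal Python sketch of these three steps follows; the Node class, its fields, and the visit callback are illustrative assumptions, not part of the article's pseudocode.

    # Morris in-order traversal: temporary "threads" (right pointers from a
    # node's in-order predecessor back to the node) replace the stack, so the
    # traversal needs only O(1) extra space and restores the tree afterwards.
    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def morris_inorder(root, visit):
        node = root
        while node is not None:
            if node.left is None:
                visit(node)                   # no left subtree: visit, go right
                node = node.right
            else:
                pred = node.left              # rightmost node of the left subtree
                while pred.right is not None and pred.right is not node:
                    pred = pred.right
                if pred.right is None:
                    pred.right = node         # step 1: create link to successor
                    node = node.left
                else:
                    pred.right = None         # step 3: revert the change
                    visit(node)               # step 2: visit via the link
                    node = node.right

    morris_inorder(Node(2, Node(1), Node(3)), lambda n: print(n.key))  # 1, 2, 3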
### Breadth-first search

Listed below is pseudocode for a simple queue-based level-order traversal. It requires space proportional to the maximum number of nodes at a given depth, which can be as much as half the total number of nodes. A more space-efficient approach for this type of traversal can be implemented using an iterative deepening depth-first search (see the sketch after the pseudocode).

    procedure levelorder(node)
        queue ← empty queue
        queue.enqueue(node)
        while not queue.isEmpty()
            node ← queue.dequeue()
            visit(node)
            if node.left ≠ null
                queue.enqueue(node.left)
            if node.right ≠ null
                queue.enqueue(node.right)
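The iterative-deepening alternative can be sketched as follows (an illustrative Python sketch; the Node shape and visit callback are assumptions): each pass visits only the nodes at one fixed depth, deepening until a pass finds no nodes, so the space used is proportional to the height rather than the width of the tree.

    from collections import namedtuple

    Node = namedtuple("Node", "key left right")   # assumed node shape

    def levelorder_iddfs(root, visit):
        """Level order in O(h) space; time may grow to O(n*h) on skewed trees."""
        def visit_depth(node, depth):
            # Visit all nodes at exactly `depth`; report whether any exist.
            if node is None:
                return False
            if depth == 0:
                visit(node)
                return True
            found_left = visit_depth(node.left, depth - 1)
            found_right = visit_depth(node.right, depth - 1)
            return found_left or found_right
        depth = 0
        while visit_depth(root, depth):
            depth += 1

    tiny = Node("B", Node("A", None, None), Node("C", None, None))
    levelorder_iddfs(tiny, lambda n: print(n.key))   # B, A, C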
If the tree is represented by an array (first index is 0), it is sufficient to iterate through all elements:

    procedure levelorder(array)
        for i from 0 to array.size - 1
            visit(array[i])
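This linear scan is a level-order traversal when the array stores the tree in the usual implicit (heap-style) layout, where the children of the node at index i sit at indices 2i+1 and 2i+2; a brief Python sketch with an illustrative tree:

    # Implicit array layout: children of index i live at 2*i + 1 and 2*i + 2,
    # so ascending index order coincides with level order.
    tree = ["F", "B", "G", "A", "D"]          # levels:  F | B G | A D

    def levelorder(array, visit):
        for i in range(len(array)):           # i from 0 to array.size - 1
            visit(array[i])

    def child_indices(array, i):
        """Indices of node i's children that actually exist in the array."""
        return [j for j in (2 * i + 1, 2 * i + 2) if j < len(array)]

    levelorder(tree, print)                   # F, B, G, A, D
    print(child_indices(tree, 1))             # [3, 4]: "A" and "D" under "B"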
## Infinite trees

While traversal is usually done for trees with a finite number of nodes (and hence finite depth and finite branching factor), it can also be done for infinite trees. This is of particular interest in functional programming (particularly with lazy evaluation), as infinite data structures can often be easily defined and worked with, though they are not (strictly) evaluated, as this would take infinite time. Some finite trees are too large to represent explicitly, such as the game tree for chess or go, and so it is useful to analyze them as if they were infinite.

A basic requirement for traversal is to visit every node eventually. For infinite trees, simple algorithms often fail this. For example, given a binary tree of infinite depth, a depth-first search will go down one side (by convention the left side) of the tree, never visiting the rest, and indeed an in-order or post-order traversal will never visit any nodes, as it has not reached a leaf (and in fact never will). By contrast, a breadth-first (level-order) traversal will traverse a binary tree of infinite depth without problem, and indeed will traverse any tree with bounded branching factor, as the sketch below illustrates.
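In a language with generators this is easy to demonstrate concretely. The following Python sketch (the path-string encoding of nodes is an illustrative assumption) defines an infinite binary tree lazily and shows that breadth-first traversal reaches any given node after finitely many steps:

    from collections import deque
    from itertools import islice

    # Nodes are path strings; children are computed on demand, never stored,
    # so the infinite tree is defined in full but only the visited prefix is
    # ever evaluated.
    def children(path):
        return (path + "L", path + "R")

    def bfs(root):
        queue = deque([root])
        while queue:
            node = queue.popleft()
            yield node                        # visit before expanding further
            queue.extend(children(node))      # enqueue the two lazy children

    # Every node sits at a finite depth, so it is reached after finitely many
    # steps, even though the traversal as a whole never terminates:
    print(list(islice(bfs(""), 7)))   # ['', 'L', 'R', 'LL', 'LR', 'RL', 'RR']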
On the other hand, given a tree of depth 2, where the root has infinitely many children, and each of these children has two children, a depth-first search will visit all nodes: once it exhausts the grandchildren (children of children of one node), it will move on to the next (assuming it is not post-order, in which case it never reaches the root). By contrast, a breadth-first search will never reach the grandchildren, as it seeks to exhaust the children first.

A more sophisticated analysis of running time can be given via infinite ordinal numbers; for example, the breadth-first search of the depth 2 tree above will take ω·2 steps: ω for the first level, and then another ω for the second level. Thus, simple depth-first or breadth-first searches do not traverse every infinite tree, and are not efficient on very large trees.
However, hybrid methods can traverse any (countably) infinite tree, essentially via a diagonal argument ("diagonal", a combination of vertical and horizontal, corresponds to a combination of depth and breadth). Concretely, given the infinitely branching tree of infinite depth, label the root (), the children of the root (1), (2), ..., the grandchildren (1, 1), (1, 2), ..., (2, 1), (2, 2), ..., and so on.
The nodes are thus in a one-to-one correspondence with finite (possibly empty) sequences of positive numbers, which are countable and can be placed in order first by sum of entries, and then by lexicographic order within a given sum. Only finitely many sequences sum to a given value, so all entries are reached; formally, there are a finite number of compositions of a given natural number, specifically $$ 2^{n-1} $$ compositions of $$ n \geq 1 . $$ This gives a traversal, reproduced by the sketch after the following list. Explicitly:
1. ()
1. (1)
1. (1, 1) (2)
1. (1, 1, 1) (1, 2) (2, 1) (3)
1. (1, 1, 1, 1) (1, 1, 2) (1, 2, 1) (1, 3) (2, 1, 1) (2, 2) (3, 1) (4)

etc.
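This enumeration, compositions of each total in lexicographic order with totals taken in increasing order, can be written as a short recursive generator; a Python sketch with illustrative function names:

    from itertools import islice

    def compositions(total):
        """Yield all compositions of `total` in lexicographic order."""
        if total == 0:
            yield ()
            return
        for first in range(1, total + 1):
            for rest in compositions(total - first):
                yield (first,) + rest

    def diagonal_traversal():
        """Every finite sequence of positive integers, each exactly once."""
        total = 0
        while True:
            yield from compositions(total)
            total += 1

    print(list(islice(diagonal_traversal(), 8)))
    # [(), (1,), (1, 1), (2,), (1, 1, 1), (1, 2), (2, 1), (3,)]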
This can be interpreted as mapping the infinite depth binary tree onto this tree and then applying breadth-first search: replace the "down" edges connecting a parent node to its second and later children with "right" edges from the first child to the second child, from the second child to the third child, etc. Thus at each step one can either go down (append a 1 to the end of the sequence) or go right (add one to the last number), except at the root, which is extra and can only go down. This shows the correspondence between the infinite binary tree and the above numbering; the sum of the entries (minus one) corresponds to the distance from the root, which agrees with the $$ 2^{n-1} $$ nodes at depth $$ n-1 $$ in the infinite binary tree (2 corresponds to binary).