In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem.
The method is named for its use of the Bonferroni inequalities.[1] Application of the method to confidence intervals was described by Olive Jean Dunn.[2]
Statistical hypothesis testing is based on rejecting the null hypothesis when the likelihood of the observed data would be low if the null hypothesis were true. If multiple hypotheses are tested, the probability of observing a rare event increases, and therefore, the likelihood of incorrectly rejecting a null hypothesis (i.e., making a Type I error) increases.[3]
The Bonferroni correction compensates for that increase by testing each individual hypothesis at a significance level of α/m, where α is the desired overall alpha level and m is the number of hypotheses.[4] For example, if a trial is testing m = 20 hypotheses with a desired overall α = 0.05, then the Bonferroni correction would test each individual hypothesis at α = 0.05/20 = 0.0025.
The Bonferroni correction can also be applied as a p-value adjustment: Using that approach, instead of adjusting the alpha level, each p-value is multiplied by the number of tests (with adjusted p-values that exceed 1 then being reduced to 1), and the alpha level is left unchanged. The significance decisions using this approach will be the same as when using the alpha-level adjustment approach.
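As a concrete illustration of the two equivalent approaches described above, here is a minimal NumPy sketch (not from the article; the p-values are made-up numbers used only for illustration):

import numpy as np

alpha = 0.05
p_values = np.array([0.001, 0.012, 0.03, 0.20])   # hypothetical raw p-values
m = len(p_values)

# Approach 1: adjust the significance level to alpha/m.
reject_alpha_adjusted = p_values <= alpha / m

# Approach 2: adjust the p-values (multiply by m, cap at 1) and keep alpha unchanged.
p_adjusted = np.minimum(p_values * m, 1.0)
reject_p_adjusted = p_adjusted <= alpha

# Both approaches give the same significance decisions.
assert np.array_equal(reject_alpha_adjusted, reject_p_adjusted)
print(reject_alpha_adjusted)   # [ True  True False False]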
Let H_1, …, H_m be a family of null hypotheses and let p_1, …, p_m be their corresponding p-values. Let m be the total number of null hypotheses, and let m_0 be the number of true null hypotheses (which is presumably unknown to the researcher). The family-wise error rate (FWER) is the probability of rejecting at least one true H_i, that is, of making at least one type I error. The Bonferroni correction rejects the null hypothesis for each p_i ≤ α/m, thereby controlling the FWER at ≤ α. Proof of this control follows from Boole's inequality, as follows:
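A sketch of the standard argument (I_0 below denotes the index set of the true null hypotheses, a symbol introduced only for this derivation; the second inequality uses the fact that a valid p-value satisfies P(p_i ≤ α/m) ≤ α/m when H_i is true):

\mathrm{FWER} \;=\; P\!\left(\bigcup_{i \in I_0}\left\{p_i \le \frac{\alpha}{m}\right\}\right) \;\le\; \sum_{i \in I_0} P\!\left(p_i \le \frac{\alpha}{m}\right) \;\le\; m_0 \cdot \frac{\alpha}{m} \;\le\; \alpha .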
This control does not require any assumptions about dependence among the p-values or about how many of the null hypotheses are true.[5]
Rather than testing each hypothesis at the α/m level, the hypotheses may be tested at any other combination of levels that add up to α, provided that the level of each test is decided before looking at the data.[6] For example, for two hypothesis tests, an overall α of 0.05 could be maintained by conducting one test at 0.04 and the other at 0.01.
The procedure proposed by Dunn[2] can be used to adjust confidence intervals. If one establishes m confidence intervals, and wishes to have an overall confidence level of 1 − α, each individual confidence interval can be adjusted to the level of 1 − α/m.[2]
When searching for a signal in a continuous parameter space there can also be a problem of multiple comparisons, or look-elsewhere effect. For example, a physicist might be looking to discover a particle of unknown mass by considering a large range of masses; this was the case during the Nobel Prize winning detection of the Higgs boson. In such cases, one can apply a continuous generalization of the Bonferroni correction by employing Bayesian logic to relate the effective number of trials, m, to the prior-to-posterior volume ratio.[7]
There are alternative ways to control the family-wise error rate. For example, the Holm–Bonferroni method and the Šidák correction are uniformly more powerful procedures than the Bonferroni correction, meaning that they are always at least as powerful. But unlike the Bonferroni procedure, these methods do not control the expected number of Type I errors per family (the per-family Type I error rate).[8]
With respect to family-wise error rate (FWER) control, the Bonferroni correction can be conservative if there are a large number of tests and/or the test statistics are positively correlated.[9]
Multiple-testing corrections, including the Bonferroni procedure, increase the probability of Type II errors when null hypotheses are false, i.e., they reduce statistical power.[10][9]
https://en.wikipedia.org/wiki/Bonferroni_correction
In probability theory and statistics, the χ²-distribution (chi-squared distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables.[2]
The chi-squared distribution χ²_k is a special case of the gamma distribution and the univariate Wishart distribution. Specifically, if X ~ χ²_k then X ~ Gamma(α = k/2, θ = 2) (where α is the shape parameter and θ the scale parameter of the gamma distribution) and X ~ W_1(1, k).
The scaled chi-squared distribution s²χ²_k is a reparametrization of the gamma distribution and the univariate Wishart distribution. Specifically, if X ~ s²χ²_k then X ~ Gamma(α = k/2, θ = 2s²) and X ~ W_1(s², k).
The chi-squared distribution is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in construction of confidence intervals.[3][4][5][6] This distribution is sometimes called the central chi-squared distribution, a special case of the more general noncentral chi-squared distribution.[7]
The chi-squared distribution is used in the common chi-squared tests for goodness of fit of an observed distribution to a theoretical one, the independence of two criteria of classification of qualitative data, and in finding the confidence interval for estimating the population standard deviation of a normal distribution from a sample standard deviation. Many other statistical tests also use this distribution, such as Friedman's analysis of variance by ranks.
If Z_1, …, Z_k are independent, standard normal random variables, then the sum of their squares,
Q = Z_1² + Z_2² + ⋯ + Z_k²,
is distributed according to the chi-squared distribution with k degrees of freedom. This is usually denoted as Q ~ χ²(k) or Q ~ χ²_k.
The chi-squared distribution has one parameter: a positive integer k that specifies the number of degrees of freedom (the number of random variables being summed, the Z_i's).
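A minimal Monte Carlo sketch (not from the article) of this definition, using NumPy and SciPy: the sum of squares of k independent standard normal draws should follow the chi-squared distribution with k degrees of freedom.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, n_samples = 5, 100_000
Z = rng.standard_normal((n_samples, k))
Q = (Z ** 2).sum(axis=1)          # sum of k squared standard normals

print(Q.mean(), Q.var())          # should be close to k and 2k (here 5 and 10)
# Kolmogorov-Smirnov test against the chi-squared CDF with k degrees of freedom
print(stats.kstest(Q, cdf=stats.chi2(df=k).cdf))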
The chi-squared distribution is used primarily in hypothesis testing, and to a lesser extent for confidence intervals for population variance when the underlying distribution is normal. Unlike more widely known distributions such as the normal distribution and the exponential distribution, the chi-squared distribution is not as often applied in the direct modeling of natural phenomena. It arises in the following hypothesis tests, among others:
It is also a component of the definition of the t-distribution and the F-distribution used in t-tests, analysis of variance, and regression analysis.
The primary reason for which the chi-squared distribution is extensively used in hypothesis testing is its relationship to the normal distribution. Many hypothesis tests use a test statistic, such as the t-statistic in a t-test. For these hypothesis tests, as the sample size, n, increases, the sampling distribution of the test statistic approaches the normal distribution (central limit theorem). Because the test statistic (such as t) is asymptotically normally distributed, provided the sample size is sufficiently large, the distribution used for hypothesis testing may be approximated by a normal distribution. Testing hypotheses using a normal distribution is well understood and relatively easy. The simplest chi-squared distribution is the square of a standard normal distribution. So wherever a normal distribution could be used for a hypothesis test, a chi-squared distribution could be used.
Suppose that Z is a random variable sampled from the standard normal distribution, where the mean is 0 and the variance is 1: Z ~ N(0, 1). Now, consider the random variable X = Z². The distribution of the random variable X is an example of a chi-squared distribution: X ~ χ²_1. The subscript 1 indicates that this particular chi-squared distribution is constructed from only 1 standard normal distribution. A chi-squared distribution constructed by squaring a single standard normal distribution is said to have 1 degree of freedom. Thus, as the sample size for a hypothesis test increases, the distribution of the test statistic approaches a normal distribution. Just as extreme values of the normal distribution have low probability (and give small p-values), extreme values of the chi-squared distribution have low probability.
An additional reason that the chi-squared distribution is widely used is that it turns up as the large sample distribution of generalized likelihood ratio tests (LRT).[8] LRTs have several desirable properties; in particular, simple LRTs commonly provide the highest power to reject the null hypothesis (Neyman–Pearson lemma) and this leads also to optimality properties of generalised LRTs. However, the normal and chi-squared approximations are only valid asymptotically. For this reason, it is preferable to use the t distribution rather than the normal approximation or the chi-squared approximation for a small sample size. Similarly, in analyses of contingency tables, the chi-squared approximation will be poor for a small sample size, and it is preferable to use Fisher's exact test. Ramsey shows that the exact binomial test is always more powerful than the normal approximation.[9]
Lancaster shows the connections among the binomial, normal, and chi-squared distributions, as follows.[10] De Moivre and Laplace established that a binomial distribution could be approximated by a normal distribution. Specifically they showed the asymptotic normality of the random variable
χ = (m − Np) / √(Npq),
where m is the observed number of successes in N trials, where the probability of success is p, and q = 1 − p.
Squaring both sides of the equation gives
χ² = (m − Np)² / (Npq).
Using N = Np + N(1 − p), N = m + (N − m), and q = 1 − p, this equation can be rewritten as
χ² = (m − Np)² / (Np) + (N − m − Nq)² / (Nq).
The expression on the right is of the form that Karl Pearson would generalize to the form
χ² = Σ_{i=1}^{n} (O_i − E_i)² / E_i,
where
χ² = Pearson's cumulative test statistic, which asymptotically approaches a χ² distribution; O_i = the number of observations of type i; E_i = N p_i = the expected (theoretical) frequency of type i, asserted by the null hypothesis that the fraction of type i in the population is p_i; and n = the number of cells in the table.
In the case of a binomial outcome (flipping a coin), the binomial distribution may be approximated by a normal distribution (for sufficiently large n). Because the square of a standard normal distribution is the chi-squared distribution with one degree of freedom, the probability of a result such as 1 heads in 10 trials can be approximated either by using the normal distribution directly, or the chi-squared distribution for the normalised, squared difference between observed and expected value. However, many problems involve more than the two possible outcomes of a binomial, and instead require 3 or more categories, which leads to the multinomial distribution. Just as de Moivre and Laplace sought for and found the normal approximation to the binomial, Pearson sought for and found a degenerate multivariate normal approximation to the multinomial distribution (the numbers in each category add up to the total sample size, which is considered fixed). Pearson showed that the chi-squared distribution arose from such a multivariate normal approximation to the multinomial distribution, taking careful account of the statistical dependence (negative correlations) between numbers of observations in different categories.[10]
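A minimal sketch (not from the article) of Pearson's statistic for the coin-flip example above: 1 head in 10 flips under a fair-coin null, with the observed counts chosen purely for illustration.

from scipy import stats

observed = [1, 9]          # heads, tails
expected = [5, 5]          # fair coin, N = 10

chi2_stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi2_stat)           # 6.4

# SciPy computes the same statistic and a p-value from the chi-squared
# distribution with (number of cells - 1) = 1 degree of freedom.
print(stats.chisquare(f_obs=observed, f_exp=expected))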
The probability density function (pdf) of the chi-squared distribution is
f(x; k) = x^{k/2 − 1} e^{−x/2} / (2^{k/2} Γ(k/2)) for x > 0, and f(x; k) = 0 otherwise,
where Γ(k/2) denotes the gamma function, which has closed-form values for integer k.
For derivations of the pdf in the cases of one, two and k degrees of freedom, see Proofs related to chi-squared distribution.
Its cumulative distribution function is
F(x; k) = γ(k/2, x/2) / Γ(k/2) = P(k/2, x/2),
where γ(s, t) is the lower incomplete gamma function and P(s, t) is the regularized gamma function.
In a special case of k = 2 this function has the simple form
F(x; 2) = 1 − e^{−x/2},
which can be easily derived by integrating f(x; 2) = (1/2) e^{−x/2} directly. The integer recurrence of the gamma function makes it easy to compute F(x; k) for other small, even k.
Tables of the chi-squared cumulative distribution function are widely available and the function is included in many spreadsheets and all statistical packages.
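A small sketch (not from the article) checking the k = 2 special case F(x; 2) = 1 − e^{−x/2} against SciPy's chi-squared CDF at a few arbitrary points.

import numpy as np
from scipy import stats

x = np.array([0.5, 1.0, 2.0, 5.0])
closed_form = 1 - np.exp(-x / 2)
scipy_cdf = stats.chi2.cdf(x, df=2)

print(np.allclose(closed_form, scipy_cdf))   # True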
Letting z ≡ x/k, Chernoff bounds on the lower and upper tails of the CDF may be obtained.[11] For the cases when 0 < z < 1 (which include all of the cases when this CDF is less than half): F(zk; k) ≤ (z e^{1−z})^{k/2}.
The tail bound for the cases when z > 1, similarly, is 1 − F(zk; k) ≤ (z e^{1−z})^{k/2}.
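A quick numeric sketch (not from the article) of the tail bounds stated above, comparing them with SciPy's exact chi-squared CDF and survival function for an arbitrary choice of k.

import numpy as np
from scipy import stats

k = 10
for z in [0.2, 0.5, 0.8]:                     # lower tail, z < 1
    bound = (z * np.exp(1 - z)) ** (k / 2)
    print(z, stats.chi2.cdf(z * k, df=k) <= bound)
for z in [1.5, 2.0, 3.0]:                     # upper tail, z > 1
    bound = (z * np.exp(1 - z)) ** (k / 2)
    print(z, stats.chi2.sf(z * k, df=k) <= bound)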
For another approximation for the CDF modeled after the cube of a Gaussian, see under Noncentral chi-squared distribution.
The following is a special case of Cochran's theorem.
Theorem. If Z_1, …, Z_n are independent identically distributed (i.i.d.), standard normal random variables, then Σ_{t=1}^{n} (Z_t − Z̄)² ~ χ²_{n−1}, where Z̄ = (1/n) Σ_{t=1}^{n} Z_t.
Proof. Let Z ~ N(0, I) be a vector of n independent normally distributed random variables,
and Z̄ their average.
Then Σ_{t=1}^{n} (Z_t − Z̄)² = Σ_{t=1}^{n} Z_t² − n Z̄² = Zᵀ [I − (1/n) 1 1ᵀ] Z =: Zᵀ M Z, where I is the identity matrix and 1 the all-ones vector. M has one eigenvector b_1 := (1/√n) 1 with eigenvalue 0,
and n − 1 eigenvectors b_2, …, b_n (all orthogonal to b_1) with eigenvalue 1,
which can be chosen so that Q := (b_1, …, b_n) is an orthogonal matrix.
Since also X := Qᵀ Z ~ N(0, Qᵀ I Q) = N(0, I),
we have Σ_{t=1}^{n} (Z_t − Z̄)² = Zᵀ M Z = Xᵀ Qᵀ M Q X = X_2² + ⋯ + X_n² ~ χ²_{n−1}, which proves the claim.
It follows from the definition of the chi-squared distribution that the sum of independent chi-squared variables is also chi-squared distributed. Specifically, if X_i, i = 1, …, n, are independent chi-squared variables with k_i, i = 1, …, n, degrees of freedom, respectively, then Y = X_1 + ⋯ + X_n is chi-squared distributed with k_1 + ⋯ + k_n degrees of freedom.
The sample mean of n i.i.d. chi-squared variables of degree k is distributed according to a gamma distribution with shape α and scale θ parameters:
X̄ = (1/n) Σ_{i=1}^{n} X_i ~ Gamma(α = nk/2, θ = 2/n).
Asymptotically, given that for a shape parameter α going to infinity, a Gamma distribution converges towards a normal distribution with expectation μ = α·θ and variance σ² = αθ², the sample mean converges towards:
X̄ → N(μ = k, σ² = 2k/n) as n → ∞.
Note that we would have obtained the same result invoking instead the central limit theorem, noting that for each chi-squared variable of degree k the expectation is k, and its variance 2k (and hence the variance of the sample mean X̄ is σ² = 2k/n).
The differential entropy is given by
h = k/2 + ln(2 Γ(k/2)) + (1 − k/2) ψ(k/2),
where ψ(x) is the digamma function.
The chi-squared distribution is the maximum entropy probability distribution for a random variate X for which E(X) = k and E(ln(X)) = ψ(k/2) + ln(2) are fixed. Since the chi-squared is in the family of gamma distributions, this can be derived by substituting appropriate values in the expectation of the log moment of the gamma distribution. For derivation from more basic principles, see the derivation in moment-generating function of the sufficient statistic.
The noncentral moments (raw moments) of a chi-squared distribution with k degrees of freedom are given by[12][13]
E[X^m] = k (k + 2) (k + 4) ⋯ (k + 2m − 2) = 2^m Γ(m + k/2) / Γ(k/2).
The cumulants are readily obtained by a power series expansion of the logarithm of the characteristic function:
κ_n = 2^{n−1} (n − 1)! k,
with cumulant generating function ln E[e^{tX}] = −(k/2) ln(1 − 2t).
The chi-squared distribution exhibits strong concentration around its mean. The standard Laurent–Massart[14] bounds are, for all x > 0:
P(X − k ≥ 2√(kx) + 2x) ≤ e^{−x} and P(k − X ≥ 2√(kx)) ≤ e^{−x}.
One consequence is that, if Z ~ N(0, 1)^k is a Gaussian random vector in ℝ^k, then as the dimension k grows, the squared length of the vector is concentrated tightly around k with a width k^{1/2+α}:
Pr(‖Z‖² ∈ [k − 2k^{1/2+α}, k + 2k^{1/2+α} + 2k^α]) ≥ 1 − e^{−k^α},
where the exponent α can be chosen as any value in ℝ.
Since the cumulant generating function for χ²(k) is K(t) = −(k/2) ln(1 − 2t), and its convex dual is K*(q) = (1/2)(q − k + k ln(k/q)), the standard Chernoff bound yields
ln Pr(X ≥ (1 + ε)k) ≤ −(k/2)(ε − ln(1 + ε)),
ln Pr(X ≤ (1 − ε)k) ≤ −(k/2)(−ε − ln(1 − ε)),
where 0 < ε < 1. By the union bound,
Pr(X ∈ (1 ± ε)k) ≥ 1 − 2 e^{−(k/2)((1/2)ε² − (1/3)ε³)}.
This result is used in proving the Johnson–Lindenstrauss lemma.[15]
By the central limit theorem, because the chi-squared distribution is the sum of k independent random variables with finite mean and variance, it converges to a normal distribution for large k. For many practical purposes, for k > 50 the distribution is sufficiently close to a normal distribution, so the difference is ignorable.[16] Specifically, if X ~ χ²(k), then as k tends to infinity, the distribution of (X − k)/√(2k) tends to a standard normal distribution. However, convergence is slow as the skewness is √(8/k) and the excess kurtosis is 12/k.
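A brief sketch (not from the article) comparing the exact chi-squared upper-tail probability with the normal approximation (X − k)/√(2k) ~ N(0, 1) for a few arbitrary values of k.

import numpy as np
from scipy import stats

for k in [10, 50, 200]:
    x = k + 2 * np.sqrt(2 * k)                   # a point two "standard deviations" above the mean
    exact = stats.chi2.sf(x, df=k)
    approx = stats.norm.sf((x - k) / np.sqrt(2 * k))
    print(k, round(exact, 4), round(approx, 4))  # the two columns get closer as k grows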
The sampling distribution of ln(χ²) converges to normality much faster than the sampling distribution of χ²,[17] as the logarithmic transform removes much of the asymmetry.[18]
Other functions of the chi-squared distribution converge more rapidly to a normal distribution. Some examples are:
A chi-squared variable with k degrees of freedom is defined as the sum of the squares of k independent standard normal random variables.
If Y is a k-dimensional Gaussian random vector with mean vector μ and rank-k covariance matrix C, then X = (Y − μ)ᵀ C⁻¹ (Y − μ) is chi-squared distributed with k degrees of freedom.
The sum of squares of statistically independent unit-variance Gaussian variables which do not have mean zero yields a generalization of the chi-squared distribution called the noncentral chi-squared distribution.
If Y is a vector of k i.i.d. standard normal random variables and A is a k × k symmetric, idempotent matrix with rank k − n, then the quadratic form Yᵀ A Y is chi-square distributed with k − n degrees of freedom.
If Σ is a p × p positive-semidefinite covariance matrix with strictly positive diagonal entries, then for X ~ N(0, Σ) and w a random p-vector independent of X such that w_1 + ⋯ + w_p = 1 and w_i ≥ 0, i = 1, …, p, then
The chi-squared distribution is also naturally related to other distributions arising from the Gaussian. In particular,
The chi-squared distribution is obtained as the sum of the squares of k independent, zero-mean, unit-variance Gaussian random variables. Generalizations of this distribution can be obtained by summing the squares of other types of Gaussian random variables. Several such distributions are described below.
If X_1, …, X_n are chi-square random variables and a_1, …, a_n ∈ ℝ_{>0}, then the distribution of X = Σ_{i=1}^{n} a_i X_i is a special case of a generalized chi-squared distribution.
A closed expression for this distribution is not known. It may be, however, approximated efficiently using the property of characteristic functions of chi-square random variables.[21]
The noncentral chi-squared distribution is obtained from the sum of the squares of independent Gaussian random variables having unit variance and nonzero means.
The generalized chi-squared distribution is obtained from the quadratic form z′Az where z is a zero-mean Gaussian vector having an arbitrary covariance matrix, and A is an arbitrary matrix.
The chi-squared distribution X ~ χ²_k is a special case of the gamma distribution, in that X ~ Γ(k/2, 1/2) using the rate parameterization of the gamma distribution (or X ~ Γ(k/2, 2) using the scale parameterization of the gamma distribution),
where k is an integer.
Because the exponential distribution is also a special case of the gamma distribution, we also have that if X ~ χ²_2, then X ~ Exp(1/2), an exponential distribution with rate 1/2 (mean 2).
The Erlang distribution is also a special case of the gamma distribution and thus we also have that if X ~ χ²_k with even k, then X is Erlang distributed with shape parameter k/2 and rate parameter 1/2 (scale parameter 2).
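A short sketch (not from the article) checking the special-case relationships by comparing densities on a grid with SciPy: χ²(k) equals Gamma(shape = k/2, scale = 2), and χ²(2) equals an exponential distribution with rate 1/2 (SciPy's scale = 2).

import numpy as np
from scipy import stats

x = np.linspace(0.1, 10, 50)
k = 6
print(np.allclose(stats.chi2.pdf(x, df=k), stats.gamma.pdf(x, a=k / 2, scale=2)))  # True
print(np.allclose(stats.chi2.pdf(x, df=2), stats.expon.pdf(x, scale=2)))           # True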
The chi-squared distribution has numerous applications in inferential statistics, for instance in chi-squared tests and in estimating variances. It enters the problem of estimating the mean of a normally distributed population and the problem of estimating the slope of a regression line via its role in Student's t-distribution. It enters all analysis of variance problems via its role in the F-distribution, which is the distribution of the ratio of two independent chi-squared random variables, each divided by their respective degrees of freedom.
Following are some of the most common situations in which the chi-squared distribution arises from a Gaussian-distributed sample.
The chi-squared distribution is also often encountered in magnetic resonance imaging.[22]
The p-value is the probability of observing a test statistic at least as extreme in a chi-squared distribution. Accordingly, since the cumulative distribution function (CDF) for the appropriate degrees of freedom (df) gives the probability of having obtained a value less extreme than this point, subtracting the CDF value from 1 gives the p-value. A low p-value, below the chosen significance level, indicates statistical significance, i.e., sufficient evidence to reject the null hypothesis. A significance level of 0.05 is often used as the cutoff between significant and non-significant results.
The table below gives a number of p-values matching to χ² for the first 10 degrees of freedom.
These values can be calculated by evaluating the quantile function (also known as the "inverse CDF" or "ICDF") of the chi-squared distribution;[24] e.g., the χ² ICDF for p = 0.05 and df = 7 yields 2.1673 ≈ 2.17 as in the table above, noting that 1 − p is the p-value from the table.
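A minimal sketch (not from the article) reproducing the quoted ICDF value with SciPy: the chi-squared quantile function at p = 0.05 with df = 7, and the corresponding upper-tail probability.

from scipy import stats

print(stats.chi2.ppf(0.05, df=7))   # about 2.1673
print(stats.chi2.sf(2.1673, df=7))  # about 0.95, i.e. 1 - 0.05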
This distribution was first described by the German geodesist and statistician Friedrich Robert Helmert in papers of 1875–6,[25][26] where he computed the sampling distribution of the sample variance of a normal population. Thus in German this was traditionally known as the Helmert'sche ("Helmertian") or "Helmert distribution".
The distribution was independently rediscovered by the English mathematician Karl Pearson in the context of goodness of fit, for which he developed his Pearson's chi-squared test, published in 1900, with computed table of values published in (Elderton 1902), collected in (Pearson 1914, pp. xxxi–xxxiii, 26–28, Table XII).
The name "chi-square" ultimately derives from Pearson's shorthand for the exponent in a multivariate normal distribution with the Greek letter Chi, writing −½χ² for what would appear in modern notation as −½xᵀΣ⁻¹x (Σ being the covariance matrix).[27] The idea of a family of "chi-squared distributions", however, is not due to Pearson but arose as a further development due to Fisher in the 1920s.[25]
https://en.wikipedia.org/wiki/Chi-squared_distribution
In statistics, a latent class model (LCM) is a model for clustering multivariate discrete data. It assumes that the data arise from a mixture of discrete distributions, within each of which the variables are independent. It is called a latent class model because the class to which each data point belongs is unobserved, or latent.
Latent class analysis (LCA) is a subset of structural equation modeling, used to find groups or subtypes of cases in multivariate categorical data. These subtypes are called "latent classes".[1][2]
Confronted with a situation as follows, a researcher might choose to use LCA to understand the data: Imagine that symptoms a-d have been measured in a range of patients with diseases X, Y, and Z, and that disease X is associated with the presence of symptoms a, b, and c, disease Y with symptoms b, c, d, and disease Z with symptoms a, c and d.
The LCA will attempt to detect the presence of latent classes (the disease entities), creating patterns of association in the symptoms. As in factor analysis, the LCA can also be used to classify cases according to their maximum likelihood class membership.[1][3]
Because the criterion for solving the LCA is to achieve latent classes within which there is no longer any association of one symptom with another (because the class is the disease which causes their association), and because the set of diseases a patient has (or the class a case is a member of) causes the symptom association, the symptoms will be "conditionally independent", i.e., conditional on class membership, they are no longer related.[1]
Within each latent class, the observed variables arestatistically independent. This is an important aspect. Usually the observed variables are statistically dependent. By introducing the latent variable, independence is restored in the sense that within classes variables are independent (local independence). We then say that the association between the observed variables is explained by the classes of the latent variable (McCutcheon, 1987).
In one form, the latent class model is written as
P(i_1, i_2, …, i_N) ≈ Σ_{t=1}^{T} p_t Π_{n=1}^{N} p^n_{i_n, t},
where T is the number of latent classes and p_t are the so-called recruitment or unconditional probabilities that should sum to one. The p^n_{i_n, t} are the marginal or conditional probabilities.
For a two-way latent class model (two observed variables), the form is
P(i, j) ≈ Σ_{t=1}^{T} p_t p^1_{i, t} p^2_{j, t}.
This two-way model is related to probabilistic latent semantic analysis and non-negative matrix factorization.
The probability model used in LCA is closely related to the Naive Bayes classifier. The main difference is that in LCA, the class membership of an individual is a latent variable, whereas in Naive Bayes classifiers the class membership is an observed label.
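An illustrative EM sketch (not from the article, and not any particular package's API; the function name fit_lcm and the simulated data are hypothetical) of a latent class model with binary indicator variables: T latent classes, each with independent Bernoulli item probabilities.

import numpy as np

def fit_lcm(X, T=2, n_iter=200, seed=0):
    # X is an (n_cases, n_items) array of 0/1 responses.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    weights = np.full(T, 1.0 / T)                 # unconditional class probabilities p_t
    theta = rng.uniform(0.25, 0.75, size=(T, m))  # P(item = 1 | class t)
    for _ in range(n_iter):
        # E-step: posterior probability of each class for each case
        log_lik = (X[:, None, :] * np.log(theta) +
                   (1 - X[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        log_post = np.log(weights) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class weights and item probabilities (small smoothing avoids 0/1)
        weights = post.mean(axis=0)
        theta = (post.T @ X + 1e-6) / (post.sum(axis=0)[:, None] + 2e-6)
    return weights, theta, post

# Hypothetical usage on simulated data with two classes:
rng = np.random.default_rng(1)
true_theta = np.array([[0.9, 0.9, 0.1, 0.1], [0.1, 0.1, 0.9, 0.9]])
classes = rng.integers(0, 2, size=500)
X = (rng.random((500, 4)) < true_theta[classes]).astype(int)
weights, theta, post = fit_lcm(X, T=2)
print(np.round(theta, 2))   # rows should approximate the two true item-probability profiles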
There are a number of methods with distinct names and uses that share a common relationship. Cluster analysis is, like LCA, used to discover taxon-like groups of cases in data. Multivariate mixture estimation (MME) is applicable to continuous data, and assumes that such data arise from a mixture of distributions: imagine a set of heights arising from a mixture of men and women. If a multivariate mixture estimation is constrained so that measures must be uncorrelated within each distribution, it is termed latent profile analysis. Modified to handle discrete data, this constrained analysis is known as LCA. Discrete latent trait models further constrain the classes to form from segments of a single dimension: essentially allocating members to classes on that dimension: an example would be assigning cases to social classes on a dimension of ability or merit.
As a practical instance, the variables could be multiple choice items of a political questionnaire. The data in this case consist of an N-way contingency table with answers to the items for a number of respondents. In this example, the latent variable refers to political opinion and the latent classes to political groups. Given group membership, the conditional probabilities specify the chance certain answers are chosen.
LCA may be used in many fields, such as collaborative filtering,[4] behavior genetics,[5] and evaluation of diagnostic tests.[6]
https://en.wikipedia.org/wiki/Latent_class_model
In marketing, market segmentation or customer segmentation is the process of dividing a consumer or business market into meaningful sub-groups of current or potential customers (or consumers) known as segments.[1] Its purpose is to identify profitable and growing segments that a company can target with distinct marketing strategies.
In dividing or segmenting markets, researchers typically look for common characteristics such as shared needs, common interests, similar lifestyles, or even similar demographic profiles. The overall aim of segmentation is to identify high-yield segments – that is, those segments that are likely to be the most profitable or that have growth potential – so that these can be selected for special attention (i.e. become target markets). Many different ways to segment a market have been identified. Business-to-business (B2B) sellers might segment the market into different types of businesses or countries, while business-to-consumer (B2C) sellers might segment the market into demographic segments, such as lifestyle, behavior, or socioeconomic status.
Market segmentation assumes that different market segments require different marketing programs – that is, different offers, prices, promotions, distribution, or some combination of marketing variables. Market segmentation is not only designed to identify the most profitable segments but also to develop profiles of key segments to better understand their needs and purchase motivations. Insights from segmentation analysis are subsequently used to support marketing strategy development and planning.
In practice, marketers implement market segmentation using the S-T-P framework,[2] which stands for Segmentation → Targeting → Positioning. That is, partitioning a market into one or more consumer categories, of which some are further selected for targeting, and products or services are positioned in a way that resonates with the selected target market or markets.
Market segmentation is the process of dividing mass markets into groups with similar needs and wants.[3] The rationale for market segmentation is that in order to achieve competitive advantage and superior performance, firms should: "(1) identify segments of industry demand, (2) target specific segments of demand, and (3) develop specific 'marketing mixes' for each targeted market segment."[4] From an economic perspective, segmentation is built on the assumption that heterogeneity in demand allows for demand to be disaggregated into segments with distinct demand functions.[5]
The business historian Richard S. Tedlow identifies four stages in the evolution of market segmentation:[6]
The practice of market segmentation emerged well before marketers thought about it at a theoretical level.[7] Archaeological evidence suggests that Bronze Age traders segmented trade routes according to geographical circuits.[8] Other evidence suggests that the practice of modern market segmentation was developed incrementally from the 16th century onwards. Retailers, operating outside the major metropolitan cities, could not afford to serve one type of clientele exclusively, yet retailers needed to find ways to separate the wealthier clientele from the "riff-raff". One simple technique was to have a window opening out onto the street from which customers could be served. This allowed the sale of goods to the common people, without encouraging them to come inside. Another solution, that came into vogue starting in the late sixteenth century, was to invite favored customers into a back room of the store, where goods were permanently on display. Yet another technique that emerged around the same time was to hold a showcase of goods in the shopkeeper's private home for the benefit of wealthier clients. Samuel Pepys, for example, writing in 1660, describes being invited to the home of a retailer to view a wooden jack.[9] The eighteenth-century English entrepreneurs, Josiah Wedgwood and Matthew Boulton, both staged expansive showcases of their wares in their private residences or in rented halls to which only the upper classes were invited, while Wedgwood used a team of itinerant salesmen to sell wares to the masses.[10]
Evidence of early marketing segmentation has also been noted elsewhere in Europe. A study of the German book trade found examples of both product differentiation and market segmentation in the 1820s.[11] From the 1880s, German toy manufacturers were producing models of tin toys for specific geographic markets: London omnibuses and ambulances destined for the British market; French postal delivery vans for Continental Europe; and American locomotives intended for sale in America.[12] Such activities suggest that basic forms of market segmentation have been practiced since the 17th century and possibly earlier.
Contemporary market segmentation emerged in the first decades of the twentieth century as marketers responded to two pressing issues. First, demographic and purchasing data were available for groups but rarely for individuals; secondly, advertising and distribution channels were available for groups, but rarely for single consumers. Between 1902 and 1910, George B Waldron, working at Mahin's Advertising Agency in the United States, used tax registers, city directories, and census data to show advertisers the proportion of educated vs illiterate consumers and the earning capacity of different occupations, etc. in a very early example of simple market segmentation.[13][14] In 1924 Paul Cherington developed the 'ABCD' household typology; the first socio-demographic segmentation tool.[13][15] By the 1930s, market researchers such as Ernest Dichter recognized that demographics alone were insufficient to explain different marketing behaviors and began exploring the use of lifestyles, attitudes, values, beliefs and culture to segment markets.[16] With access to group-level data only, brand marketers approached the task from a tactical viewpoint. Thus, segmentation was essentially a brand-driven process.
Wendell R. Smith is generally credited with being the first to introduce the concept of market segmentation into the marketing literature in 1956 with the publication of his article, "Product Differentiation and Market Segmentation as Alternative Marketing Strategies."[17]Smith's article makes it clear that he had observed "many examples of segmentation" emerging and to a certain extent saw this as a "natural force" in the market that would "not be denied."[18]As Schwarzkopf points out, Smith was codifying implicit knowledge that had been used in advertising and brand management since at least the 1920s.[19]
Until relatively recently, most segmentation approaches have retained a tactical perspective in that they address immediate short-term decisions, such as describing the current “market served”, and are concerned with informing marketing mix decisions. However, with the advent of digital communications and mass data storage, it has been possible for marketers to conceive of segmenting at the level of the individual consumer. Extensive data is now available to support segmentation in very narrow groups or even for a single customer, allowing marketers to devise a customized offer with an individual price that can be disseminated via real-time communications.[20] Some scholars have argued that the fragmentation of markets has rendered traditional approaches to market segmentation less useful.[21]
The limitations of conventional segmentation have been well documented in the literature.[22]
Market segmentation has many critics. Despite its limitations, market segmentation remains one of the enduring concepts in marketing and continues to be widely used in practice. One American study, for example, suggested that almost 60 percent of senior executives had used market segmentation in the past two years.[31]
A key consideration for marketers is whether they should segment. Depending on company philosophy, resources, product type, or market characteristics, a business may develop an undifferentiated approach or a differentiated approach. In an undifferentiated approach, the marketer ignores segmentation and develops a product that meets the needs of the largest number of buyers.[32] In a differentiated approach, the firm targets one or more market segments and develops separate offers for each segment.[32]
In consumer marketing, it is difficult to find examples of undifferentiated approaches. Even goods such as salt and sugar, which were once treated as commodities, are now highly differentiated. Consumers can purchase a variety of salt products: cooking salt, table salt, sea salt, rock salt, kosher salt, mineral salt, herbal or vegetable salts, iodized salt, salt substitutes, and many more. Sugar also comes in many different types – cane sugar, beet sugar, raw sugar, white refined sugar, brown sugar, caster sugar, sugar lumps, icing sugar (also known as milled sugar), sugar syrup, invert sugar, and a plethora of sugar substitutes including smart sugar, which is essentially a blend of pure sugar and a sugar substitute. Each of these product types is designed to meet the needs of specific market segments. Invert sugar and sugar syrups, for example, are marketed to food manufacturers where they are used in the production of conserves, chocolate, and baked goods. Sugars marketed to consumers appeal to different usage segments – refined sugar is primarily for use on the table, while caster sugar and icing sugar are primarily designed for use in home-baked goods.
Many factors are likely to affect a company's segmentation strategy:[34]
The process of segmenting the market is deceptively simple. Marketers tend to use the so-called S-T-P process, that is Segmentation → Targeting → Positioning, as a broad framework for simplifying the process.[1] Segmentation comprises identifying the market to be segmented; identification, selection, and application of bases to be used in that segmentation; and development of profiles. Targeting comprises an evaluation of each segment's attractiveness and selection of the segments to be targeted. Positioning comprises the identification of optimal positions and the development of the marketing program.
Perhaps the most important marketing decision a firm makes is the selection of one or more market segments on which to focus. A market segment is a portion of a larger market whose needs differ somewhat from the larger market. Since a market segment has unique needs, a firm that develops a total product focused solely on the needs of that segment will be able to meet the segment's desires better than a firm whose product or service attempts to meet the needs of multiple segments.[36] Current research shows that, in practice, firms apply three variations of the S-T-P framework: ad-hoc segmentation, syndicated segmentation, and feral segmentation.[30]
The market for any given product or service is known as the market potential or the total addressable market (TAM). Given that this is the market to be segmented, the market analyst should begin by identifying the size of the potential market. For existing products and services, estimating the size and value of the market potential is relatively straightforward. However, estimating the market potential can be very challenging when a product or service is new to the market and no historical data on which to base forecasts exists.
A basic approach is to first assess the size of the broad population, then estimate the percentage likely to use the product or service, and finally estimate the revenue potential.
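A toy sketch (not from the article) of this chain-ratio style calculation; all numbers are hypothetical illustrations.

population = 5_000_000          # size of the broad population
usage_rate = 0.10               # estimated share likely to use the product
revenue_per_user = 40.0         # estimated annual revenue per user

market_potential = population * usage_rate * revenue_per_user
print(f"Estimated market potential: ${market_potential:,.0f} per year")  # $20,000,000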
Another approach is to use a historical analogy.[37]For example, the manufacturer of HDTV might assume that the number of consumers willing to adopt high-definition TV will be similar to the adoption rate for color TV. To support this type of analysis, data for household penetration of TV, Radio, PCs, and other communications technologies are readily available from government statistics departments. Finding useful analogies can be challenging because every market is unique. However, analogous product adoption and growth rates can provide the analyst with benchmark estimates and can be used to cross-validate other methods that might be used to forecast sales or market size.
A more robust technique for estimating the market potential is known as the Bass diffusion model, the equation for which follows:[38]
dN(t)/dt = [p + (q/m) N(t)] [m − N(t)]
where:
N(t) = the cumulative number of adopters at time t; m = the ultimate market potential (the total number of eventual adopters); p = the coefficient of innovation (external influence); and q = the coefficient of imitation (internal, word-of-mouth influence).
The major challenge with the Bass model is estimating the parameters for p and q. However, the Bass model has been so widely used in empirical studies that the values of p and q for more than 50 consumer and industrial categories have been determined and are widely published in tables.[39] The average value for p is 0.037 and for q is 0.327.
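A brief sketch (not from the article) simulating cumulative adoption with a discrete-time approximation of the Bass model, using the average published coefficients quoted above (p = 0.037, q = 0.327) and a hypothetical market potential of 1 million adopters.

import numpy as np

p, q, m = 0.037, 0.327, 1_000_000
years = 20
N = np.zeros(years + 1)              # cumulative adopters, N[0] = 0
for t in range(years):
    new_adopters = (p + q * N[t] / m) * (m - N[t])
    N[t + 1] = N[t] + new_adopters

for t in [1, 5, 10, 15, 20]:
    print(t, int(N[t]))              # adoption follows the familiar S-shaped curve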
A major step in the segmentation process is the selection of a suitable base. In this step, marketers are looking for a means of achieving internal homogeneity (similarity within the segments), and external heterogeneity (differences between segments).[40]In other words, they are searching for a process that minimizes differences between members of a segment and maximizes differences between each segment. In addition, the segmentation approach must yield segments that are meaningful for the specific marketing problem or situation. For example, a person's hair color may be a relevant base for a shampoo manufacturer, but it would not be relevant for a seller of financial services. Selecting the right base requires a good deal of thought and a basic understanding of the market to be segmented.
In reality, marketers can segment the market using any base or variable provided that it is identifiable, substantial, responsive, actionable, and stable.[41]
For example, although dress size is not a standard base for segmenting a market, some fashion houses have successfully segmented the market using women's dress size as a variable.[43] However, the most common bases for segmenting consumer markets include: geographics, demographics, psychographics, and behavior. Marketers normally select a single base for the segmentation analysis, although some bases can be combined into a single segmentation with care. Combining bases is the foundation of an emerging form of segmentation known as ‘Hybrid Segmentation’ (see § Hybrid segmentation). This approach seeks to deliver a single segmentation that is equally useful across multiple marketing functions such as brand positioning, product and service innovation as well as eCRM.
The following sections provide a description of the most common forms of consumer market segmentation.
Segmentation according to demography is based on consumer demographic variables such as age, income, family size, socio-economic status, etc.[44]Demographic segmentation assumes that consumers with similar demographic profiles will exhibit similar purchasing patterns, motivations, interests, and lifestyles and that these characteristics will translate into similar product/brand preferences.[45]In practice, demographic segmentation can potentially employ any variable that is used by the nation's census collectors. Examples of demographic variables and their descriptors include:
In practice, most demographic segmentation utilizes a combination of demographic variables.
The use of multiple segmentation variables normally requires the analysis of databases using sophisticated statistical techniques such as cluster analysis or principal components analysis. These types of analysis require very large sample sizes. However, data collection is expensive for individual firms. For this reason, many companies purchase data from commercial market research firms, many of whom develop proprietary software to interrogate the data.
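An illustrative sketch (not from the article) of clustering demographic variables with k-means from scikit-learn; the variables and values below are hypothetical.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical customer records: age, household income, household size
X = np.column_stack([
    rng.normal(45, 15, 1000),        # age
    rng.lognormal(10.8, 0.5, 1000),  # income
    rng.integers(1, 6, 1000),        # household size
])

X_scaled = StandardScaler().fit_transform(X)          # put variables on a comparable scale
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(segments))                          # number of customers in each segment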
The labels applied to some of the more popular demographic segments began to enter the popular lexicon in the 1980s.[51][52][53]These include the following:[54][55]
Geographic segmentation divides markets according to geographic criteria. In practice, markets can be segmented as broadly as continents and as narrowly as neighborhoods or postal codes.[56]Typical geographic variables include:
The geo-cluster approach (also called geodemographic segmentation) combines demographic data with geographic data to create richer, more detailed profiles.[57] Geo-cluster approaches are a consumer classification system designed for market segmentation and consumer profiling purposes. They classify residential regions or postcodes based on census and lifestyle characteristics obtained from a wide range of sources. This allows the segmentation of a population into smaller groups defined by individual characteristics such as demographic, socio-economic, or other shared socio-demographic characteristics.
Geographic segmentation may be considered the first step in international marketing, where marketers must decide whether to adapt their existing products and marketing programs to the unique needs of distinct geographic markets.[58]Tourism Marketing Boards often segment international visitors based on their country of origin.
Several proprietary geo-demographic packages are available for commercial use. Geographic segmentation is widely used in direct marketing campaigns to identify areas that are potential candidates for personal selling, letter-box distribution, or direct mail. Geo-cluster segmentation is widely used by Governments and public sector departments such as urban planning, health authorities, police, criminal justice departments, telecommunications, and public utility organizations such as water boards.[59]
Geo-demographic segmentation, or geoclustering, is a combination of geographic and demographic variables.
Psychographic segmentation, which is sometimes called psychometric or lifestyle segmentation, is measured by studying the activities, interests, and opinions (AIOs) of customers. It considers how people spend their leisure,[60] and which external influences they are most responsive to and influenced by. Psychographics is a very widely used basis for segmentation because it enables marketers to identify tightly defined market segments and better understand consumer motivations for product or brand choice.
While many of these proprietary psychographic segmentation analyses are well-known, the majority of studies based on psychographics are custom-designed. That is, the segments are developed for individual products at a specific time. One common thread among psychographic segmentation studies is that they use quirky names to describe the segments.[61]
Behavioural segmentation divides consumers into groups according to their observed behaviours. Many marketers believe that behavioural variables are superior to demographics and geographics for building market segments,[62] and some analysts have suggested that behavioural segmentation is killing off demographics.[63] Typical behavioural variables and their descriptors include:[64]
Note that these descriptors are merely commonly used examples. Marketers customize the variables and descriptors for both local conditions and for specific applications. For example, in the health industry, planners often segment broad markets according to 'health consciousness' and identify low, moderate, and highly health-conscious segments. This is an applied example of behavioural segmentation, using attitude to a product or service as a key descriptor or variable which has been customized for the specific application.
Purchase or usage occasion segmentation focuses on analyzing occasions when consumers might purchase or consume a product. This approach combines customer-level and occasion-level segmentation models and provides an understanding of individual customers' needs, behaviour, and value under different occasions of usage and time. Unlike traditional segmentation models, this approach assigns more than one segment to each unique customer, depending on the current circumstances they are under.
Benefit segmentation (sometimes called needs-based segmentation) was developed by Grey Advertising in the late 1960s.[66] The benefits sought by purchasers enable the market to be divided into segments with distinct needs, perceived value, benefits sought, or advantage that accrues from the purchase of a product or service. Marketers using benefit segmentation might develop products with different quality levels, performance, customer service, special features, or any other meaningful benefit and pitch different products at each of the segments identified. Benefit segmentation is one of the more commonly used approaches to segmentation and is widely used in many consumer markets including motor vehicles, fashion and clothing, furniture, consumer electronics, and holiday-makers.[67]
Loker and Purdue, for example, used benefit segmentation to segment the pleasure holiday travel market. The segments identified in this study were the naturalists, pure excitement seekers, and escapists.[68]
Attitudinal segmentation provides insight into the mindset of customers, especially the attitudes and beliefs that drive consumer decision-making and behaviour. An example of attitudinal segmentation comes from the UK's Department of Environment which segmented the British population into six segments, based on attitudes that drive behaviour relating to environmental protection:[69]
One of the difficulties organisations face when implementing segmentation into their business processes is that segmentations developed using a single variable base, e.g. attitudes, are useful only for specific business functions. As an example, segmentations driven by functional needs (e.g. “I want home appliances that are very quiet”) can provide clear direction for product development, but tell little about how to position brands, or who to target on the customer database and with what tonality of messaging.
Hybrid segmentation is a family of approaches that specifically addresses this issue by combining two or more variable bases into a single segmentation. This emergence has been driven by three factors. First, the development of more powerful AI and machine learning algorithms to help attribute segmentations to customer databases; second, the rapid increase in the breadth and depth of data that is available to commercial organisations; third, the increasing prevalence of customer databases amongst companies (which generates the commercial demand for segmentation to be used for different purposes).
A successful example of hybrid segmentation came from the travel company TUI, which in 2018 developed a hybrid segmentation using a combination of geo-demographics, high-level category attitudes, and more specific holiday-related needs.[70]Before the onset of Covid-19 travel restrictions, they credited this segmentation with having generated an incremental £50 million of revenue in the UK market alone in just over two years.[71]
In addition to geographics, demographics, psychographics, and behavioural bases, marketers occasionally turn to other means of segmenting the market or developing segment profiles.
A generation is defined as "a cohort of people born within a similar period (15 years at the upper end) who share a comparable age and life stage and who were shaped by a particular period (events, trends, and developments)."[72]Generational segmentation refers to the process of dividing and analyzing a population into cohorts based on their birth date. Generational segmentation assumes that people's values and attitudes are shaped by the key events that occurred during their lives and that these attitudes translate into product and brand preferences.
Demographers, studying population change, disagree about precise dates for each generation.[73]Dating is normally achieved by identifying population peaks or troughs, which can occur at different times in each country. For example, in Australia the post-war population boom peaked in 1960,[74]while the peak occurred somewhat later in the US and Europe,[75]with most estimates converging on 1964. Accordingly, Australian Boomers are normally defined as those born between 1945 and 1960; while American and European Boomers are normally defined as those born between 1946 and 1964. Thus, the generational segments and their dates discussed here must be taken as approximations only.
The primary generational segments identified by marketers are:[76]
Cultural segmentation is used to classify markets according to their cultural origin. Culture is a major dimension of consumer behaviour and can be used to enhance customer insight and as a component of predictive models. Cultural segmentation enables appropriate communications to be crafted for particular cultural communities. Cultural segmentation can be applied to existing customer data to measure market penetration in key cultural segments by product, brand, and channel as well as traditional measures of recency, frequency, and monetary value. These benchmarks form an important evidence base to guide strategic direction and tactical campaign activity, allowing engagement trends to be monitored over time.[78]
Cultural segmentation can be combined with other bases, especially geographics so that segments are mapped according to state, region, suburb, and neighborhood. This provides a geographical market view of population proportions and may be of benefit in selecting appropriately located premises, determining territory boundaries, and local marketing activities.
Census data is a valuable source of cultural data but cannot meaningfully be applied to individuals. Name analysis (onomastics) is the most reliable and efficient means of describing the cultural origin of individuals. The accuracy of using name analysis as a surrogate for cultural background in Australia is between 80 and 85%, after allowing for female name changes due to marriage, social or political reasons, or colonial influence. The extent of name data coverage means a user will code a minimum of 99% of individuals with their most likely ancestral origin.
Online market segmentation is similar to the traditional approaches in that the segments should be identifiable, substantial, accessible, stable, differentiable, and actionable.[79]Customer data stored in online data management systems such as aCRMorDMPenables the analysis and segmentation of consumers across a diverse set of attributes.[80]Forsyth et al., in an article in 'Internet research', grouped currently active online consumers into six groups: Simplifiers, Surfers, Bargainers, Connectors, Routiners, and Sportsters. The segments differ with regard to four customer behaviours, namely:[81]
For example,Simplifiersmake up over 50% of all online transactions. Their main characteristic is that they need easy (one-click) access to information and products as well as easy and quickly available service regarding products.Amazonis an example of a company that created an online environment for Simplifiers. They also 'dislike unsolicited e-mail, uninviting chat rooms, pop-up windows intended to encourage impulse buys, and other features that complicate their on- and off-line experience'. Surfers like to spend a lot of time online, so companies must offer a wide variety of products and constantly updated content;Bargainerslook for the best price; Connectors like to relate to others;Routinerswant content; andSportsterslike sport and entertainment sites.
Another major decision in developing the segmentation strategy is the selection of market segments that will become the focus of special attention (known astarget markets). The marketer faces important decisions:
When a marketer enters more than one market, the segments are often labeled theprimary target marketandsecondary target market.The primary market is the target market selected as the main focus of marketing activities. The secondary target market is likely to be a segment that is not as large as the primary market, but has growth potential. Alternatively, the secondary target group might consist of a small number of purchasers that account for a relatively high proportion of sales volume perhaps due to purchase value or purchase frequency.
In terms of evaluating markets, three core considerations are essential:[82]
There is no formula for evaluating the attractiveness of market segments, and a good deal of judgment must be exercised.[83]Nevertheless, a number of approaches can assist in evaluating market segments for overall attractiveness. The following lists a series of questions that can be used to evaluate target segments.
When the segments have been determined and separate offers developed for each of the core segments, the marketer's next task is to design a marketing program (also known as the marketing mix) that will resonate with the target market or markets. Developing the marketing program requires a deep knowledge of key market segments' purchasing habits, their preferred retail outlet, their media habits, and their price sensitivity. The marketing program for each brand or product should be based on the understanding of the target market (or target markets) revealed in the market profile.
Positioning is the final step in the S-T-P planning approach; Segmentation → Targeting → Positioning. It is a core framework for developing marketing plans and setting objectives. Positioning refers to decisions about how to present the offer in a way that resonates with the target market. During the research and analysis that forms the central part of segmentation and targeting, the marketer will gain insights into what motivates consumers to purchase a product or brand. These insights will form part of the positioning strategy.
According to advertising guru David Ogilvy, "Positioning is the act of designing the company’s offering and image to occupy a distinctive place in the minds of the target market. The goal is to locate the brand in the minds of consumers to maximize the potential benefit to the firm. A good brand positioning helps guide marketing strategy by clarifying the brand’s essence, what goals it helps the consumer achieve, and how it does so in a unique way."[84]
The technique known as perceptual mapping is often used to understand consumers' mental representations of brands within a given category. Traditionally two variables (often, but not necessarily, price and quality) are used to construct the map. A sample of people in the target market is asked to explain where they would place various brands in terms of the selected variables. Results are averaged across all respondents and plotted on a graph, as illustrated in the figure. The final map indicates how theaveragemember of the population views the brands that make up a category and how each of the brands relates to the other brands within the same category. While perceptual maps with two dimensions are common, multi-dimensional maps are also used.
There are different approaches to positioning:[85]
Segmenting business markets is more straightforward than segmenting consumer markets. Businesses may be segmented according to industry, business size, business location, turnover, number of employees, company technology, purchasing approach, or any other relevant variables.[86]The most widely used segmentation bases in business-to-business markets are geographics and firmographics.[87]
The most widely used bases for segmenting business markets are:
The basic approach to retention-based segmentation is that a company tags each of its active customers on four axes:
This analysis of customer lifecycles is usually included in thegrowth planof a business to determine which tactics to implement to retain or let go of customers.[91]Tactics commonly used range from providing special customer discounts to sending customers communications that reinforce the value proposition of the given service.
The choice of an appropriate statistical method for the segmentation depends on numerous factors that may include the broad approach (a-prioriorpost-hoc), the availability of data, time constraints, the marketer's skill level, and resources.[92]
A priori research occurs when "a theoretical framework is developed before the research is conducted".[93]In other words, the marketer has an idea about whether to segment the market geographically, demographically, psychographically or behaviourally before undertaking any research. For example, a marketer might want to learn more about the motivations and demographics of light and moderate users to understand what tactics could be used to increase usage rates. In this case, the target variable is known – the marketer has already segmented using a behavioural variable –user status. The next step would be to collect and analyze attitudinal data for light and moderate users. The typical analysis includes simple cross-tabulations, frequency distributions, and occasionally logistic regression or one of several proprietary methods.[94]
The main disadvantage of a-priori segmentation is that it does not explore other opportunities to identify market segments that could be more meaningful.
In contrast, post-hoc segmentation makes no assumptions about the optimal theoretical framework. Instead, the analyst's role is to determine the segments that are the most meaningful for a given marketing problem or situation. In this approach, the empirical data drives the segmentation selection. Analysts typically employ some type of clustering analysis or structural equation modeling to identify segments within the data. Post-hoc segmentation relies on access to rich datasets, usually with a very large number of cases, and uses sophisticated algorithms to identify segments.[95]
The figure alongside illustrates how segments might be formed using clustering; however, note that this diagram only uses two variables, while in practice clustering employs a large number of variables.[96]
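To make the clustering step concrete, the following is a minimal sketch of post-hoc segmentation with k-means in Python; the two attitudinal variables, the sample data, and the choice of two clusters are hypothetical illustrations rather than a prescribed method.

# Minimal post-hoc segmentation sketch: cluster hypothetical survey respondents
# on two attitudinal variables using k-means (scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical scores (1-10 scale) for price sensitivity and brand loyalty.
X = np.vstack([
    rng.normal(loc=[2.0, 8.0], scale=1.0, size=(100, 2)),
    rng.normal(loc=[8.0, 3.0], scale=1.0, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label in np.unique(kmeans.labels_):
    members = X[kmeans.labels_ == label]
    print(f"segment {label}: n={len(members)}, centroid={members.mean(axis=0).round(2)}")

In practice many more variables would be used, and the number of clusters would itself be chosen empirically.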
Marketers often engage commercial research firms or consultancies to carry out segmentation analysis, especially if they lack the statistical skills to undertake the analysis. Some segmentation, especially post-hoc analysis, relies on sophisticated statistical analysis.
Common statistical approaches and techniques used in segmentation analysis include:
Marketers use a variety of data sources for segmentation studies and market profiling. Typical sources of information include:[108][109]
|
https://en.wikipedia.org/wiki/Market_segment
|
Instatistics, themultiple comparisons,multiplicityormultiple testing problemoccurs when one considers a set ofstatistical inferencessimultaneously[1]orestimatesa subset of parameters selected based on the observed values.[2]
The larger the number of inferences made, the more likely erroneous inferences become. Several statistical techniques have been developed to address this problem, for example, by requiring astricter significance thresholdfor individual comparisons, so as to compensate for the number of inferences being made. Methods forfamily-wise error rategive the probability of false positives resulting from the multiple comparisons problem.
The problem of multiple comparisons received increased attention in the 1950s with the work of statisticians such asTukeyandScheffé. Over the ensuing decades, many procedures were developed to address the problem. In 1996, the first international conference on multiple comparison procedures took place inTel Aviv.[3]This is an active research area with work being done by, for exampleEmmanuel CandèsandVladimir Vovk.
Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has a potential to produce a "discovery". A stated confidence level generally applies only to each test considered individually, but often it is desirable to have a confidence level for the whole family of simultaneous tests.[4]Failure to compensate for multiple comparisons can have important real-world consequences, as illustrated by the following examples:
In both examples, as the number of comparisons increases, it becomes more likely that the groups being compared will appear to differ in terms of at least one attribute. Our confidence that a result will generalize to independent data should generally be weaker if it is observed as part of an analysis that involves multiple comparisons, rather than an analysis that involves only a single comparison.
For example, if one test is performed at the 5% level and the corresponding null hypothesis is true, there is only a 5% risk of incorrectly rejecting the null hypothesis. However, if 100 tests are each conducted at the 5% level and all corresponding null hypotheses are true, theexpected numberof incorrect rejections (also known asfalse positivesorType I errors) is 5. If the tests are statistically independent from each other (i.e. are performed on independent samples), the probability of at least one incorrect rejection is approximately 99.4%.
The multiple comparisons problem also applies toconfidence intervals. A single confidence interval with a 95%coverage probabilitylevel will contain the true value of the parameter in 95% of samples. However, if one considers 100 confidence intervals simultaneously, each with 95% coverage probability, the expected number of non-covering intervals is 5. If the intervals are statistically independent from each other, the probability that at least one interval does not contain the population parameter is 99.4%.
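As a quick check of these figures, a short Python sketch computes the probability of at least one false rejection (or one non-covering interval) among m independent tests at level α:

# Probability of at least one false positive among m independent tests, each
# performed at level alpha, when every null hypothesis is true.
def prob_at_least_one_error(alpha: float, m: int) -> float:
    return 1.0 - (1.0 - alpha) ** m

print(prob_at_least_one_error(0.05, 1))    # 0.05 for a single test
print(prob_at_least_one_error(0.05, 100))  # ~0.994 for 100 independent tests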
Techniques have been developed to prevent the inflation of false positive rates and non-coverage rates that occur with multiple statistical tests.
The following table defines the possible outcomes when testing multiple null hypotheses.
Suppose we have a numbermof null hypotheses, denoted by:H1,H2, ...,Hm.Using astatistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant.
Summing each type of outcome over allHiyields the following random variables:
Inmhypothesis tests of whichm0{\displaystyle m_{0}}are true null hypotheses,Ris an observable random variable, andS,T,U, andVare unobservablerandom variables.
Multiple testing correctionrefers to making statistical tests more stringent in order to counteract the problem of multiple testing. The best known such adjustment is theBonferroni correction, but other methods have been developed. Such methods are typically designed to control thefamily-wise error rateor thefalse discovery rate.
Ifmindependent comparisons are performed, thefamily-wise error rate(FWER) is given by
α¯=1−(1−α{per comparison})m{\displaystyle {\bar {\alpha }}=1-\left(1-\alpha _{\{{\text{per comparison}}\}}\right)^{m}.}
Hence, unless the tests are perfectly positively dependent (i.e., identical),α¯{\displaystyle {\bar {\alpha }}}increases as the number of comparisons increases.
If we do not assume that the comparisons are independent, then we can still say:
which follows fromBoole's inequality. Example:0.2649=1−(1−.05)6≤.05×6=0.3{\displaystyle 0.2649=1-(1-.05)^{6}\leq .05\times 6=0.3}
There are different ways to assure that the family-wise error rate is at mostα{\displaystyle \alpha }. The most conservative method, which is free of dependence and distributional assumptions, is theBonferroni correctionα{percomparison}=α/m{\displaystyle \alpha _{\mathrm {\{per\ comparison\}} }={\alpha }/m}. A marginally less conservative correction can be obtained by solving the equation for the family-wise error rate ofm{\displaystyle m}independent comparisons forα{percomparison}{\displaystyle \alpha _{\mathrm {\{per\ comparison\}} }}. This yieldsα{per comparison}=1−(1−α)1/m{\displaystyle \alpha _{\{{\text{per comparison}}\}}=1-{(1-{\alpha })}^{1/m}}, which is known as theŠidák correction. Another procedure is theHolm–Bonferroni method, which uniformly delivers more power than the simple Bonferroni correction, by testing only the lowest p-value (i=1{\displaystyle i=1}) against the strictest criterion, and the higher p-values (i>1{\displaystyle i>1}) against progressively less strict criteria.[5]α{percomparison}=α/(m−i+1){\displaystyle \alpha _{\mathrm {\{per\ comparison\}} }={\alpha }/(m-i+1)}.
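The per-comparison thresholds mentioned above are easy to compute directly; the sketch below (plain Python with NumPy, not a particular library's API) contrasts Bonferroni, Šidák, and the Holm–Bonferroni sequence of thresholds for m = 5 tests at α = 0.05:

import numpy as np

def per_comparison_levels(alpha: float, m: int):
    """Per-comparison significance levels that keep the FWER at or below alpha."""
    bonferroni = alpha / m                       # valid under arbitrary dependence
    sidak = 1.0 - (1.0 - alpha) ** (1.0 / m)     # exact for independent tests
    # Holm-Bonferroni: the i-th smallest p-value is compared against alpha/(m - i + 1).
    holm = alpha / (m - np.arange(1, m + 1) + 1)
    return bonferroni, sidak, holm

bonf, sidak, holm = per_comparison_levels(0.05, 5)
print(bonf)             # 0.01
print(round(sidak, 5))  # ~0.01021
print(holm)             # approximately [0.01, 0.0125, 0.0167, 0.025, 0.05]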
For continuous problems, one can employBayesianlogic to computem{\displaystyle m}from the prior-to-posterior volume ratio. Continuous generalizations of theBonferroniandŠidák correctionare presented in.[6]
Traditional methods for multiple comparisons adjustments focus on correcting for modest numbers of comparisons, often in ananalysis of variance. A different set of techniques have been developed for "large-scale multiple testing", in which thousands or even greater numbers of tests are performed. For example, ingenomics, when using technologies such asmicroarrays, expression levels of tens of thousands of genes can be measured, and genotypes for millions of genetic markers can be measured. Particularly in the field ofgenetic associationstudies, there has been a serious problem with non-replication — a result being strongly statistically significant in one study but failing to be replicated in a follow-up study. Such non-replication can have many causes, but it is widely considered that failure to fully account for the consequences of making multiple comparisons is one of the causes.[7]It has been argued that advances inmeasurementandinformation technologyhave made it far easier to generate large datasets forexploratory analysis, often leading to the testing of large numbers of hypotheses with no prior basis for expecting many of the hypotheses to be true. In this situation, very highfalse positive ratesare expected unless multiple comparisons adjustments are made.
For large-scale testing problems where the goal is to provide definitive results, thefamily-wise error rateremains the most accepted parameter for ascribing significance levels to statistical tests. Alternatively, if a study is viewed as exploratory, or if significant results can be easily re-tested in an independent study, control of thefalse discovery rate(FDR)[8][9][10]is often preferred. The FDR, loosely defined as the expected proportion of false positives among all significant tests, allows researchers to identify a set of "candidate positives" that can be more rigorously evaluated in a follow-up study.[11]
The practice of trying many unadjusted comparisons in the hope of finding a significant one, whether applied unintentionally or deliberately, is a known problem and is sometimes called "p-hacking".[12][13]
A basic question faced at the outset of analyzing a large set of testing results is whether there is evidence that any of the alternative hypotheses are true. One simple meta-test that can be applied when it is assumed that the tests are independent of each other is to use thePoisson distributionas a model for the number of significant results at a given level α that would be found when all null hypotheses are true.[citation needed]If the observed number of positives is substantially greater than what should be expected, this suggests that there are likely to be some true positives among the significant results.
For example, if 1000 independent tests are performed, each at level α = 0.05, we expect 0.05 × 1000 = 50 significant tests to occur when all null hypotheses are true. Based on the Poisson distribution with mean 50, the probability of observing more than 61 significant tests is less than 0.05, so if more than 61 significant results are observed, it is very likely that some of them correspond to situations where the alternative hypothesis holds. A drawback of this approach is that it overstates the evidence that some of the alternative hypotheses are true when thetest statisticsare positively correlated, which commonly occurs in practice.[citation needed]On the other hand, the approach remains valid even in the presence of correlation among the test statistics, as long as the Poisson distribution can be shown to provide a good approximation for the number of significant results. This scenario arises, for instance, when mining significant frequent itemsets from transactional datasets. Furthermore, a careful two-stage analysis can bound the FDR at a pre-specified level.[14]
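A sketch of this meta-test in Python, using SciPy's Poisson distribution; the observed count of 70 is a made-up illustration, not a result from the text:

# Poisson meta-test sketch: with all nulls true and independent tests, the count
# of significant results among 1000 tests at alpha = 0.05 is roughly Poisson(50).
from scipy.stats import poisson

alpha, n_tests = 0.05, 1000
expected_false_positives = alpha * n_tests      # 50
observed_significant = 70                       # hypothetical observed count
# Probability of seeing at least this many significant results by chance alone;
# a very small value suggests some alternative hypotheses are likely true.
p_excess = poisson.sf(observed_significant - 1, expected_false_positives)
print(expected_false_positives, p_excess)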
Another common approach that can be used in situations where thetest statisticscan be standardized toZ-scoresis to make anormal quantile plotof the test statistics. If the observed quantiles are markedly moredispersedthan the normal quantiles, this suggests that some of the significant results may be true positives.[citation needed]
|
https://en.wikipedia.org/wiki/Multiple_comparisons
|
Decision tree learningis asupervised learningapproach used instatistics,data miningandmachine learning. In this formalism, a classification or regressiondecision treeis used as apredictive modelto draw conclusions about a set of observations.
Tree models where the target variable can take a discrete set of values are calledclassificationtrees; in these tree structures,leavesrepresent class labels and branches representconjunctionsof features that lead to those class labels. Decision trees where the target variable can take continuous values (typicallyreal numbers) are calledregressiontrees. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences.[1]
Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity because they produce models that are easy to interpret and visualize, even for users without a statistical background.[2]
In decision analysis, a decision tree can be used to visually and explicitly represent decisions anddecision making. Indata mining, a decision tree describes data (but the resulting classification tree can be an input for decision making).
Decision tree learning is a method commonly used in data mining.[3]The goal is to create a model that predicts the value of a target variable based on several input variables.
A decision tree is a simple representation for classifying examples. For this section, assume that all of the inputfeatureshave finite discrete domains, and there is a single target feature called the "classification". Each element of the domain of the classification is called aclass.
A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of that feature, or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes).
A tree is built by splitting the sourceset, constituting the root node of the tree, into subsets—which constitute the successor children. The splitting is based on a set of splitting rules based on classification features.[4]This process is repeated on each derived subset in a recursive manner calledrecursive partitioning.
Therecursionis completed when the subset at a node has all the same values of the target variable, or when splitting no longer adds value to the predictions. This process oftop-down induction of decision trees(TDIDT)[5]is an example of agreedy algorithm, and it is by far the most common strategy for learning decision trees from data.[6]
Indata mining, decision trees can be described also as the combination of mathematical and computational techniques to aid the description, categorization and generalization of a given set of data.
Data comes in records of the form:
The dependent variable,Y{\displaystyle Y}, is the target variable that we are trying to understand, classify or generalize. The vectorx{\displaystyle {\textbf {x}}}is composed of the features,x1,x2,x3{\displaystyle x_{1},x_{2},x_{3}}etc., that are used for that task.
Decision trees used indata miningare of two main types:
The termclassification and regression tree (CART)analysis is anumbrella termused to refer to either of the above procedures, first introduced byBreimanet al. in 1984.[7]Trees used for regression and trees used for classification have some similarities – but also some differences, such as the procedure used to determine where to split.[7]
Some techniques, often calledensemblemethods, construct more than one decision tree:
A special case of a decision tree is adecision list,[14]which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node). While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity[citation needed], permit non-greedy learning methods[15]and monotonic constraints to be imposed.[16]
Notable decision tree algorithms include:
ID3 and CART were invented independently at around the same time (between 1970 and 1980)[citation needed], yet follow a similar approach for learning a decision tree from training tuples.
It has also been proposed to leverage concepts offuzzy set theoryfor the definition of a special version of decision tree, known as Fuzzy Decision Tree (FDT).[23]In this type of fuzzy classification, generally, an input vectorx{\displaystyle {\textbf {x}}}is associated with multiple classes, each with a different confidence value.
Boosted ensembles of FDTs have been recently investigated as well, and they have shown performances comparable to those of other very efficient fuzzy classifiers.[24]
Algorithms for constructing decision trees usually work top-down, by choosing a variable at each step that best splits the set of items.[6]Different algorithms use different metrics for measuring "best". These generally measure the homogeneity of the target variable within the subsets. Some examples are given below. These metrics are applied to each candidate subset, and the resulting values are combined (e.g., averaged) to provide a measure of the quality of the split. Depending on the underlying metric, the performance of various heuristic algorithms for decision tree learning may vary significantly.[25]
A simple and effective metric can be used to identify the degree to which true positives outweigh false positives (seeConfusion matrix). This metric, "Estimate of Positive Correctness" is defined below:
EP=TP−FP{\displaystyle E_{P}=TP-FP}
In this equation, the total false positives (FP) are subtracted from the total true positives (TP). The resulting number gives an estimate on how many positive examples the feature could correctly identify within the data, with higher numbers meaning that the feature could correctly classify more positive samples. Below is an example of how to use the metric when the full confusion matrix of a certain feature is given:
Feature A Confusion Matrix
Here we can see that the TP value would be 8 and the FP value would be 2 (the underlined numbers in the table). When we plug these numbers in the equation we are able to calculate the estimate:Ep=TP−FP=8−2=6{\displaystyle E_{p}=TP-FP=8-2=6}. This means that using the estimate on this feature would have it receive a score of 6.
However, it is worth noting that this number is only an estimate. For example, if two features both had an FP value of 2 while one of the features had a higher TP value, that feature would be ranked higher than the other because the resulting estimate from the equation would be higher. This could lead to some inaccuracies when using the metric if some features have more positive samples than others. To combat this, one could use a more powerful metric known asSensitivity, which takes into account the proportions of the values from the confusion matrix to give the actualtrue positive rate(TPR). The difference between these metrics is shown in the example below:
TPR=TP/(TP+FN)=8/(8+3)≈0.73{\displaystyle TPR=TP/(TP+FN)=8/(8+3)\approx 0.73}
TPR=TP/(TP+FN)=6/(6+2)=0.75{\displaystyle TPR=TP/(TP+FN)=6/(6+2)=0.75}
In this example, Feature A had an estimate of 6 and a TPR of approximately 0.73 while Feature B had an estimate of 4 and a TPR of 0.75. This shows that although the positive estimate for some feature may be higher, the more accurate TPR value for that feature may be lower when compared to other features that have a lower positive estimate. Depending on the situation and knowledge of the data and decision trees, one may opt to use the positive estimate for a quick and easy solution to their problem. On the other hand, a more experienced user would most likely prefer to use the TPR value to rank the features because it takes into account the proportions of the data and all the samples that should have been classified as positive.
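The two metrics can be reproduced directly from the counts quoted above (TP = 8, FP = 2, FN = 3 for Feature A; TP = 6, FP = 2, FN = 2 for Feature B):

# Compare the "estimate of positive correctness" E_P = TP - FP with the
# true positive rate TPR = TP / (TP + FN) for the two features above.
def positive_correctness(tp: int, fp: int) -> int:
    return tp - fp

def true_positive_rate(tp: int, fn: int) -> float:
    return tp / (tp + fn)

print(positive_correctness(8, 2), round(true_positive_rate(8, 3), 2))  # Feature A: 6, 0.73
print(positive_correctness(6, 2), round(true_positive_rate(6, 2), 2))  # Feature B: 4, 0.75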
Gini impurity,Gini's diversity index,[26]orGini-Simpson Indexin biodiversity research, is named after Italian mathematicianCorrado Giniand used by the CART (classification and regression tree) algorithm for classification trees. Gini impurity measures how often a randomly chosen element of a set would be incorrectly labeled if it were labeled randomly and independently according to the distribution of labels in the set. It reaches its minimum (zero) when all cases in the node fall into a single target category.
For a set of items withJ{\displaystyle J}classes and relative frequenciespi{\displaystyle p_{i}},i∈{1,2,...,J}{\displaystyle i\in \{1,2,...,J\}}, the probability of choosing an item with labeli{\displaystyle i}ispi{\displaystyle p_{i}}, and the probability of miscategorizing that item is∑k≠ipk=1−pi{\displaystyle \sum _{k\neq i}p_{k}=1-p_{i}}. The Gini impurity is computed by summing pairwise products of these probabilities for each class label:
IG(p)=∑i=1Jpi(1−pi)=1−∑i=1Jpi2{\displaystyle \mathrm {I} _{G}(p)=\sum _{i=1}^{J}p_{i}(1-p_{i})=1-\sum _{i=1}^{J}p_{i}^{2}.}
The Gini impurity is also an information theoretic measure and corresponds toTsallis Entropywith deformation coefficientq=2{\displaystyle q=2}, which in physics is associated with the lack of information in out-of-equilibrium, non-extensive, dissipative and quantum systems. For the limitq→1{\displaystyle q\to 1}one recovers the usual Boltzmann-Gibbs or Shannon entropy. In this sense, the Gini impurity is nothing but a variation of the usual entropy measure for decision trees.
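A minimal sketch of the Gini impurity computation from class counts (plain Python/NumPy, not tied to any particular tree library):

# Gini impurity of a node from its class counts: sum_i p_i * (1 - p_i),
# which is equivalent to 1 - sum_i p_i**2.
import numpy as np

def gini_impurity(class_counts) -> float:
    p = np.asarray(class_counts, dtype=float)
    p = p / p.sum()
    return float(1.0 - np.sum(p ** 2))

print(gini_impurity([10, 0]))   # 0.0   -> pure node, minimum impurity
print(gini_impurity([5, 5]))    # 0.5   -> evenly mixed two-class node
print(gini_impurity([9, 5]))    # ~0.459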
Used by theID3,C4.5and C5.0 tree-generation algorithms.Information gainis based on the concept ofentropyandinformation contentfrominformation theory.
Entropy is defined as below
H(T)=−∑ipilog2⁡pi{\displaystyle \mathrm {H} (T)=-\sum _{i}p_{i}\log _{2}p_{i}}
wherep1,p2,…{\displaystyle p_{1},p_{2},\ldots }are fractions that add up to 1 and represent the percentage of each class present in the child node that results from a split in the tree.[27]
Averaging over the possible values ofA{\displaystyle A}, the expected information gain isEA[IG(T,a)]=H(T)−H(T∣A){\displaystyle E_{A}[\mathrm {IG} (T,a)]=\mathrm {H} (T)-\mathrm {H} (T\mid A)}.
That is, the expected information gain is themutual information, meaning that on average, the reduction in the entropy ofTis the mutual information.
Information gain is used to decide which feature to split on at each step in building the tree. Simplicity is best, so we want to keep our tree small. To do so, at each step we should choose the split that results in the most consistent child nodes. A commonly used measure of consistency is calledinformationwhich is measured inbits. For each node of the tree, the information value "represents the expected amount of information that would be needed to specify whether a new instance should be classified yes or no, given that the example reached that node".[27]
Consider an example data set with four attributes:outlook(sunny, overcast, rainy),temperature(hot, mild, cool),humidity(high, normal), andwindy(true, false), with a binary (yes or no) target variable,play, and 14 data points. To construct a decision tree on this data, we need to compare the information gain of each of four trees, each split on one of the four features. The split with the highest information gain will be taken as the first split and the process will continue until all children nodes each have consistent data, or until the information gain is 0.
To find the information gain of the split usingwindy, we must first calculate the information in the data before the split. The original data contained nine yes's and five no's, giving an entropy of about 0.940 bits.
The split using the featurewindyresults in two children nodes, one for awindyvalue of true and one for awindyvalue of false. In this data set, there are six data points with a truewindyvalue, three of which have aplay(whereplayis the target variable) value of yes and three with aplayvalue of no. The eight remaining data points with awindyvalue of false contain two no's and six yes's. The information of thewindy=true node is calculated using the entropy equation above. Since there is an equal number of yes's and no's in this node, the entropy is exactly 1 bit.
For the node wherewindy=false there were eight data points, six yes's and two no's. Thus the entropy of this node is about 0.811 bits.
To find the information of the split, we take the weighted average of these two numbers based on how many observations fell into which node: (6/14) × 1 + (8/14) × 0.811 ≈ 0.892 bits.
Now we can calculate the information gain achieved by splitting on thewindyfeature: 0.940 − 0.892 ≈ 0.048 bits.
To build the tree, the information gain of each possible first split would need to be calculated. The best first split is the one that provides the most information gain. This process is repeated for each impure node until the tree is complete. This example is adapted from the example appearing in Witten et al.[27]
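The calculation above can be reproduced with a few lines of Python; the counts (9 yes / 5 no overall, 3/3 for windy = true, 6/2 for windy = false) are taken from the example itself:

# Reproduce the information gain of the "windy" split from the example above.
import math

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

parent = entropy([9, 5])                                 # ~0.940 bits before the split
children = [(6, entropy([3, 3])), (8, entropy([6, 2]))]  # (node size, node entropy)
weighted = sum(n * h for n, h in children) / 14          # ~0.892 bits after the split
print(round(parent, 3), round(weighted, 3), round(parent - weighted, 3))  # gain ~0.048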
Information gain is also known asShannon indexin biodiversity research.
Introduced in CART,[7]variance reduction is often employed in cases where the target variable is continuous (regression tree), meaning that use of many other metrics would first require discretization before being applied. The variance reduction of a nodeNis defined as the total reduction of the variance of the target variableYdue to the split at this node:
whereS{\displaystyle S},St{\displaystyle S_{t}}, andSf{\displaystyle S_{f}}are the set of presplit sample indices, the set of sample indices for which the split test is true, and the set of sample indices for which the split test is false, respectively. Each of the above summands is indeed avarianceestimate, though written in a form that does not directly refer to the mean.
By replacing(yi−yj)2{\displaystyle (y_{i}-y_{j})^{2}}in the formula above with the dissimilaritydij{\displaystyle d_{ij}}between two objectsi{\displaystyle i}andj{\displaystyle j}, the variance reduction criterion applies to any kind of object for which pairwise dissimilarities can be computed.[1]
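The following is a minimal sketch of a variance-reduction computation written, as described above, in terms of pairwise squared differences rather than an explicit mean. The weighting of each child by its share of the samples is the standard weighted-variance decomposition and is an assumption here, not a quotation of the original formula; the toy target values are hypothetical.

# Variance reduction of a candidate split, using the pairwise-difference form
# of the variance (no explicit mean), with children weighted by sample share.
import numpy as np

def pairwise_variance(y) -> float:
    y = np.asarray(y, dtype=float)
    n = len(y)
    # (1 / (2 n^2)) * sum_{i,j} (y_i - y_j)^2 equals the population variance.
    return float(((y[:, None] - y[None, :]) ** 2).sum() / (2 * n * n))

def variance_reduction(y_parent, y_true, y_false) -> float:
    n = len(y_parent)
    return (pairwise_variance(y_parent)
            - len(y_true) / n * pairwise_variance(y_true)
            - len(y_false) / n * pairwise_variance(y_false))

y = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]                     # hypothetical regression targets
print(round(variance_reduction(y, y[:3], y[3:]), 3))   # ~4.0: the split separates the two groups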
Used by CART in 1984,[28]the measure of "goodness" is a function that seeks to optimize the balance of a candidate split's capacity to create pure children with its capacity to create equally-sized children. This process is repeated for each impure node until the tree is complete. The functionφ(s∣t){\displaystyle \varphi (s\mid t)}, wheres{\displaystyle s}is a candidate split at nodet{\displaystyle t}, is defined as below
φ(s∣t)=2PLPR∑j|P(j∣tL)−P(j∣tR)|{\displaystyle \varphi (s\mid t)=2P_{L}P_{R}\sum _{j}\left|P(j\mid t_{L})-P(j\mid t_{R})\right|}
wheretL{\displaystyle t_{L}}andtR{\displaystyle t_{R}}are the left and right children of nodet{\displaystyle t}using splits{\displaystyle s}, respectively;PL{\displaystyle P_{L}}andPR{\displaystyle P_{R}}are the proportions of records int{\displaystyle t}intL{\displaystyle t_{L}}andtR{\displaystyle t_{R}}, respectively; andP(j∣tL){\displaystyle P(j\mid t_{L})}andP(j∣tR){\displaystyle P(j\mid t_{R})}are the proportions of classj{\displaystyle j}records intL{\displaystyle t_{L}}andtR{\displaystyle t_{R}}, respectively.
Consider an example data set with three attributes:savings(low, medium, high),assets(low, medium, high),income(numerical value), and a binary target variablecredit risk(good, bad) and 8 data points.[28]The full data is presented in the table below. To start a decision tree, we will calculate the maximum value ofφ(s∣t){\displaystyle \varphi (s\mid t)}using each feature to find which one will split the root node. This process will continue until all children are pure or allφ(s∣t){\displaystyle \varphi (s\mid t)}values are below a set threshold.
To findφ(s∣t){\displaystyle \varphi (s\mid t)}of the featuresavings, we need to note the quantity of each value. The original data contained three low's, three medium's, and two high's. Out of the low's, one had a goodcredit riskwhile out of the medium's and high's, 4 had a goodcredit risk. Assume a candidate splits{\displaystyle s}such that records with a lowsavingswill be put in the left child and all other records will be put into the right child.
To build the tree, the "goodness" of all candidate splits for the root node need to be calculated. The candidate with the maximum value will split the root node, and the process will continue for each impure node until the tree is complete.
Compared to other metrics such as information gain, the measure of "goodness" will attempt to create a more balanced tree, leading to more-consistent decision time. However, it sacrifices some priority for creating pure children which can lead to additional splits that are not present with other metrics.
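A small sketch of the "goodness" calculation, assuming the 2·P_L·P_R·Σ|P(j|t_L) − P(j|t_R)| form of the measure given above, applied to the candidate split from the example (low savings to the left child: 1 good / 2 bad; medium and high savings to the right: 4 good / 1 bad):

# "Goodness" of a candidate split: 2 * P_L * P_R * sum_j |P(j|t_L) - P(j|t_R)|,
# where P_L and P_R are the shares of records sent to the left and right children.
import numpy as np

def goodness_of_split(left_class_counts, right_class_counts) -> float:
    left = np.asarray(left_class_counts, dtype=float)
    right = np.asarray(right_class_counts, dtype=float)
    n_left, n_right = left.sum(), right.sum()
    p_l = n_left / (n_left + n_right)
    p_r = n_right / (n_left + n_right)
    return float(2 * p_l * p_r * np.abs(left / n_left - right / n_right).sum())

# Left child (low savings): 1 good / 2 bad.  Right child: 4 good / 1 bad.
print(round(goodness_of_split([1, 2], [4, 1]), 3))   # ~0.438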
Amongst other data mining methods, decision trees have various advantages:
Many data mining software packages provide implementations of one or more decision tree algorithms (e.g. random forest).
Open source examples include:
Notable commercial software:
In a decision tree, all paths from the root node to the leaf node proceed by way of conjunction, orAND. In a decision graph, it is possible to use disjunctions (ORs) to join two or more paths together usingminimum message length(MML).[43]Decision graphs have been further extended to allow for previously unstated new attributes to be learnt dynamically and used at different places within the graph.[44]The more general coding scheme results in better predictive accuracy and log-loss probabilistic scoring.[citation needed]In general, decision graphs infer models with fewer leaves than decision trees.
Evolutionary algorithms have been used to avoid local optimal decisions and search the decision tree space with littlea prioribias.[45][46]
It is also possible for a tree to be sampled usingMCMC.[47]
The tree can be searched for in a bottom-up fashion.[48]Alternatively, several trees can be constructed in parallel to reduce the expected number of tests until classification.[38]
|
https://en.wikipedia.org/wiki/Classification_and_regression_tree
|
BrownBoostis aboostingalgorithm that may be robust tonoisy datasets. BrownBoost is an adaptive version of theboost by majorityalgorithm. As is the case for all boosting algorithms, BrownBoost is used in conjunction with othermachine learningmethods. BrownBoost was introduced byYoav Freundin 2001.[1]
AdaBoostperforms well on a variety of datasets; however, it can be shown that AdaBoost does not perform well on noisy data sets.[2]This is a result of AdaBoost's focus on examples that are repeatedly misclassified. In contrast, BrownBoost effectively "gives up" on examples that are repeatedly misclassified. The core assumption of BrownBoost is that noisy examples will be repeatedly mislabeled by the weak hypotheses and non-noisy examples will be correctly labeled frequently enough to not be "given up on." Thus only noisy examples will be "given up on," whereas non-noisy examples will contribute to the final classifier. In turn, if the final classifier is learned from the non-noisy examples, thegeneralization errorof the final classifier may be much better than if learned from noisy and non-noisy examples.
The user of the algorithm can set the amount of error to be tolerated in the training set. Thus, if the training set is noisy (say 10% of all examples are assumed to be mislabeled), the booster can be told to accept a 10% error rate. Since the noisy examples may be ignored, only the true examples will contribute to the learning process.
BrownBoost uses a non-convex potential loss function, thus it does not fit into theAdaBoostframework. The non-convex optimization provides a method to avoid overfitting noisy data sets. However, in contrast to boosting algorithms that analytically minimize a convex loss function (e.g.AdaBoostandLogitBoost), BrownBoost solves a system of two equations and two unknowns using standard numerical methods.
The only parameter of BrownBoost (c{\displaystyle c}in the algorithm) is the "time" the algorithm runs. The theory of BrownBoost states that each hypothesis takes a variable amount of time (t{\displaystyle t}in the algorithm) which is directly related to the weight given to the hypothesisα{\displaystyle \alpha }. The time parameter in BrownBoost is analogous to the number of iterationsT{\displaystyle T}in AdaBoost.
A larger value ofc{\displaystyle c}means that BrownBoost will treat the data as if it were less noisy and therefore will give up on fewer examples. Conversely, a smaller value ofc{\displaystyle c}means that BrownBoost will treat the data as more noisy and give up on more examples.
During each iteration of the algorithm, a hypothesis is selected with some advantage over random guessing. The weight of this hypothesisα{\displaystyle \alpha }and the "amount of time passed"t{\displaystyle t}during the iteration are simultaneously solved in a system of two non-linear equations (1. the hypothesis is uncorrelated with the example weights, and 2. the potential is held constant) with two unknowns (weight of hypothesisα{\displaystyle \alpha }and time passedt{\displaystyle t}). This can be solved by bisection (as implemented in theJBoostsoftware package) orNewton's method(as described in the original paper by Freund). Once these equations are solved, the margins of each example (ri(xj){\displaystyle r_{i}(x_{j})}in the algorithm) and the amount of time remainings{\displaystyle s}are updated appropriately. This process is repeated until there is no time remaining.
The initial potential is defined to be1m∑j=1m1−erf(c)=1−erf(c){\displaystyle {\frac {1}{m}}\sum _{j=1}^{m}1-{\mbox{erf}}({\sqrt {c}})=1-{\mbox{erf}}({\sqrt {c}})}. Since a constraint of each iteration is that the potential be held constant, the final potential is1m∑j=1m1−erf(ri(xj)/c)=1−erf(c){\displaystyle {\frac {1}{m}}\sum _{j=1}^{m}1-{\mbox{erf}}(r_{i}(x_{j})/{\sqrt {c}})=1-{\mbox{erf}}({\sqrt {c}})}. Thus the final error islikelyto be near1−erf(c){\displaystyle 1-{\mbox{erf}}({\sqrt {c}})}. However, the final potential function is not the 0–1 loss error function. For the final error to be exactly1−erf(c){\displaystyle 1-{\mbox{erf}}({\sqrt {c}})}, the variance of the loss function must decrease linearly w.r.t. time to form the 0–1 loss function at the end of boosting iterations. This is not yet discussed in the literature and is not in the definition of the algorithm below.
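As a small illustration of the quantities above, the sketch below evaluates the initial potential 1 − erf(√c) and the average final potential (1/m) Σ 1 − erf(r_j/√c); the value of c and the final margins are hypothetical, and this is only the potential evaluation, not the full BrownBoost update.

# BrownBoost potentials from the text above: the initial potential 1 - erf(sqrt(c))
# and the final potential averaged over example margins, (1/m) sum 1 - erf(r_j / sqrt(c)).
import numpy as np
from scipy.special import erf

def initial_potential(c: float) -> float:
    return float(1.0 - erf(np.sqrt(c)))

def final_potential(margins, c: float) -> float:
    margins = np.asarray(margins, dtype=float)
    return float(np.mean(1.0 - erf(margins / np.sqrt(c))))

c = 2.0                                            # hypothetical "time" parameter
margins = np.array([1.5, 2.0, 0.3, -0.4, 2.5])     # hypothetical final margins
print(round(initial_potential(c), 4))              # target error level 1 - erf(sqrt(2))
print(round(final_potential(margins, c), 4))       # average potential at the end of boosting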
The final classifier is a linear combination of weak hypotheses and is evaluated in the same manner as most other boosting algorithms.
Input:
Initialise:
Whiles>0{\displaystyle s>0}:
Output:H(x)=sign(∑iαihi(x)){\displaystyle H(x)={\textrm {sign}}\left(\sum _{i}\alpha _{i}h_{i}(x)\right)}
In preliminary experimental results with noisy datasets, BrownBoost outperformedAdaBoost's generalization error; however,LogitBoostperformed as well as BrownBoost.[4]An implementation of BrownBoost can be found in the open source softwareJBoost.
|
https://en.wikipedia.org/wiki/BrownBoost
|
Themultiplicative weights update methodis analgorithmic techniquemost commonly used for decision making and prediction, and also widely deployed in game theory and algorithm design. The simplest use case is the problem of prediction from expert advice, in which a decision maker needs to iteratively decide on an expert whose advice to follow. The method assigns initial weights to the experts (usually identical initial weights), and updates these weights multiplicatively and iteratively according to the feedback of how well an expert performed: reducing it in case of poor performance, and increasing it otherwise.[1]It was discovered repeatedly in very diverse fields such as machine learning (AdaBoost,Winnow, Hedge),optimization(solvinglinear programs), theoretical computer science (devising fast algorithm forLPsandSDPs), andgame theory.
"Multiplicative weights" implies the iterative rule used in algorithms derived from the multiplicative weight update method.[2]It is given with different names in the different fields where it was discovered or rediscovered.
The earliest known version of this technique was in an algorithm named "fictitious play" which was proposed ingame theoryin the early 1950s. Grigoriadis and Khachiyan[3]applied a randomized variant of "fictitious play" to solve two-playerzero-sum gamesefficiently using the multiplicative weights algorithm. In this case, the player allocates higher weights to the actions that had a better outcome and chooses a strategy relying on these weights. Inmachine learning, Littlestone applied the earliest form of the multiplicative weights update rule in his famouswinnow algorithm, which is similar to Minsky and Papert's earlierperceptron learning algorithm. Later, he generalized the winnow algorithm to the weighted majority algorithm. Freund and Schapire followed his steps and generalized the winnow algorithm in the form of the hedge algorithm.
The multiplicative weights algorithm is also widely applied incomputational geometrysuch asKenneth Clarkson'salgorithm forlinear programming (LP)with a bounded number of variables in linear time.[4][5]Later, Bronnimann and Goodrich employed analogous methods to findset coversforhypergraphswith smallVC dimension.[6]
Inoperations researchand on-line statistical decision making problem field, the weighted majority algorithm and its more complicated versions have been found independently.
In the field of computer science, some researchers have previously observed the close relationships between multiplicative update algorithms used in different contexts. Young discovered the similarities between fast LP algorithms and Raghavan's method of pessimistic estimators for derandomization of randomized rounding algorithms; Klivans and Servedio linked boosting algorithms in learning theory to proofs of Yao's XOR Lemma; Garg and Khandekar defined a common framework for convex optimization problems that contains Garg-Konemann and Plotkin-Shmoys-Tardos as subcases.[1]
The Hedge algorithm is a special case ofmirror descent.
A binary decision needs to be made based on n experts' opinions to attain an associated payoff. In the first round, all experts' opinions have the same weight. The decision maker will make the first decision based on the majority of the experts' predictions. Then, in each successive round, the decision maker will repeatedly update the weight of each expert's opinion depending on the correctness of his prior predictions. Real-life examples include predicting whether it will rain tomorrow or whether the stock market will go up or down.
Given a sequential game played between an adversary and an aggregator who is advised by N experts, the goal is for the aggregator to make as few mistakes as possible. Assume there is an expert among the N experts who always gives the correct prediction. In the halving algorithm, only the consistent experts are retained. Experts who make mistakes will be dismissed. For every decision, the aggregator decides by taking a majority vote among the remaining experts. Therefore, every time the aggregator makes a mistake, at least half of the remaining experts are dismissed. The aggregator makes at mostlog2(N)mistakes.[2]
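A minimal sketch of the halving algorithm, under the stated assumption that at least one expert is always correct; the expert predictions and outcomes below are hypothetical boolean sequences.

# Halving algorithm sketch: keep only experts consistent with the history so far,
# and predict by majority vote of the remaining experts.
import numpy as np

def halving_predict_and_update(active, expert_predictions, truth):
    """One round: majority vote of active experts, then dismiss the mistaken ones."""
    votes = expert_predictions[active]
    prediction = votes.sum() * 2 >= len(votes)          # majority vote (ties -> True)
    mistaken = active & (expert_predictions != truth)
    return prediction, active & ~mistaken

n_experts = 8
active = np.ones(n_experts, dtype=bool)
rng = np.random.default_rng(1)
for t in range(5):                                      # hypothetical sequence of rounds
    truth = bool(rng.integers(0, 2))
    preds = rng.integers(0, 2, size=n_experts).astype(bool)
    preds[0] = truth                                    # expert 0 is always correct
    guess, active = halving_predict_and_update(active, preds, truth)
    print(f"round {t}: predicted {guess}, truth {truth}, experts remaining {active.sum()}")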
Source:[1][7]
Unlike the halving algorithm, which dismisses experts who have made mistakes, the weighted majority algorithm discounts their advice. Given the same "expert advice" setup, suppose we have n decisions, and we need to select one decision for each loop. In each loop, every decision incurs a cost. All costs will be revealed after making the choice. The cost is 0 if the expert is correct, and 1 otherwise. This algorithm's goal is to limit its cumulative losses to roughly the same as the best expert's.
A naive algorithm that makes its choice based on a majority vote in every iteration does not work, since the majority of the experts can be wrong consistently every time. The weighted majority algorithm corrects this trivial algorithm by keeping a weight for each expert instead of fixing the cost at either 1 or 0.[1]This makes fewer mistakes compared to the halving algorithm.
Ifη=0{\displaystyle \eta =0}, the weight of the expert's advice will remain the same. Whenη{\displaystyle \eta }increases, the weight of the expert's advice will decrease. Note that some researchers fixη=1/2{\displaystyle \eta =1/2}in weighted majority algorithm.
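A sketch of the deterministic weighted majority update described above: follow the weighted vote, then multiply each mistaken expert's weight by (1 − η). The expert predictions and outcomes are hypothetical.

# Weighted majority algorithm sketch: predict with the weighted vote and shrink
# the weights of mistaken experts by a factor of (1 - eta).
import numpy as np

def weighted_majority_round(weights, expert_predictions, truth, eta=0.5):
    yes_weight = weights[expert_predictions].sum()
    prediction = yes_weight >= weights.sum() / 2        # weighted majority vote
    weights = np.where(expert_predictions != truth, weights * (1 - eta), weights)
    return prediction, weights

rng = np.random.default_rng(2)
n_experts = 6
weights = np.ones(n_experts)
for t in range(5):                                      # hypothetical rounds
    truth = bool(rng.integers(0, 2))
    preds = rng.integers(0, 2, size=n_experts).astype(bool)
    guess, weights = weighted_majority_round(weights, preds, truth)
    print(f"round {t}: predicted {guess}, truth {truth}, weights {np.round(weights, 3)}")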
AfterT{\displaystyle T}steps, letmiT{\displaystyle m_{i}^{T}}be the number of mistakes of expert i andMT{\displaystyle M^{T}}be the number of mistakes our algorithm has made. Then we have the following bound for everyi{\displaystyle i}:
In particular, this holds for i which is the best expert. Since the best expert will have the leastmiT{\displaystyle m_{i}^{T}}, it will give the best bound on the number of mistakes made by the algorithm as a whole.
This algorithm can be understood as follows:[2][8]
Given the same setup with N experts. Consider the special situation where the proportions of experts predicting positive and negative, counting the weights, are both close to 50%. Then, there might be a tie. Following the weight update rule in weighted majority algorithm, the predictions made by the algorithm would be randomized. The algorithm calculates the probabilities of experts predicting positive or negatives, and then makes a random decision based on the computed fraction:[further explanation needed]
predict 1 with probability W1/W and 0 with probability W0/W,
where W0 and W1 denote the total weights of the experts predicting 0 and 1, respectively, and W = W0 + W1 is the total weight.
The number of mistakes made by the randomized weighted majority algorithm is bounded as:
whereαβ=ln(1β)1−β{\displaystyle \alpha _{\beta }={\frac {\ln({\frac {1}{\beta }})}{1-\beta }}}andcβ=11−β{\displaystyle c_{\beta }={\frac {1}{1-\beta }}}.
Note that only the learning algorithm is randomized. The underlying assumption is that the examples and experts' predictions are not random. The only randomness lies in how the learner makes its own prediction.
In this randomized algorithm,αβ→1{\displaystyle \alpha _{\beta }\rightarrow 1}ifβ→1{\displaystyle \beta \rightarrow 1}. Compared to the deterministic weighted majority algorithm, this randomization roughly halves the number of mistakes the algorithm is going to make.[9]However, it is important to note that in some research, people defineη=1/2{\displaystyle \eta =1/2}in the weighted majority algorithm and allow0≤η≤1{\displaystyle 0\leq \eta \leq 1}in therandomized weighted majority algorithm.[2]
The multiplicative weights method is usually used to solve a constrained optimization problem. Let each expert be the constraint in the problem, and the events represent the points in the area of interest. The punishment of the expert corresponds to how well its corresponding constraint is satisfied on the point represented by an event.[1]
Source:[1][9]
Suppose we were given the distributionP{\displaystyle P}on experts. LetA{\displaystyle A}= payoff matrix of a finite two-player zero-sum game, withn{\displaystyle n}rows.
When the row playerpr{\displaystyle p_{r}}uses plani{\displaystyle i}and the column playerpc{\displaystyle p_{c}}uses planj{\displaystyle j}, the payoff of playerpc{\displaystyle p_{c}}isA(i,j){\displaystyle A\left(i,j\right)}≔Aij{\displaystyle A_{ij}}, assumingA(i,j)∈[0,1]{\displaystyle A\left(i,j\right)\in \left[0,1\right]}.
If playerpr{\displaystyle p_{r}}chooses actioni{\displaystyle i}from a distributionP{\displaystyle P}over the rows, then the expected result for playerpc{\displaystyle p_{c}}selecting actionj{\displaystyle j}isA(P,j)=Ei∈P[A(i,j)]{\displaystyle A\left(P,j\right)=E_{i\in P}\left[A\left(i,j\right)\right]}.
To maximizeA(P,j){\displaystyle A\left(P,j\right)}, playerpc{\displaystyle p_{c}}should choose planj{\displaystyle j}. Similarly, when the column player draws planj{\displaystyle j}from a distributionQ{\displaystyle Q}over the columns, the expected payoff isA(i,Q)=Ej∈Q[A(i,j)]{\displaystyle A\left(i,Q\right)=E_{j\in Q}\left[A\left(i,j\right)\right]}, and the row playerpr{\displaystyle p_{r}}would choose plani{\displaystyle i}to minimize this payoff. By John von Neumann's min-max theorem, we obtain:
maxPminjA(P,j)=minQmaxiA(i,Q){\displaystyle \max _{P}\min _{j}A(P,j)=\min _{Q}\max _{i}A(i,Q)}
where P and i range over the distributions over rows and over the rows, respectively, and Q and j range over the distributions over columns and over the columns, respectively.
Then, letλ∗{\displaystyle \lambda ^{*}}denote the common value of above quantities, also named as the "value of the game". Letδ>0{\displaystyle \delta >0}be an error parameter. To solve the zero-sum game bounded by additive error ofδ{\displaystyle \delta },
So there is an algorithm solving the zero-sum game up to an additive factor of δ using O(log2(n)/δ2{\displaystyle \delta ^{2}}) calls to ORACLE, with an additional processing time of O(n) per call.[9]
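A sketch of using multiplicative weights to approximately solve a zero-sum game, with a best-response column player standing in for the ORACLE; the payoff matrix is a hypothetical example, and the particular update (multiplying weights by (1 − η) raised to the incurred payoff) is one common variant, not necessarily the exact rule analyzed above.

# Multiplicative weights sketch for approximately solving a zero-sum game:
# the row player maintains a distribution over rows; the column player best-responds.
import numpy as np

def solve_zero_sum(A, n_rounds=2000, eta=0.05):
    n_rows, _ = A.shape
    weights = np.ones(n_rows)
    avg_strategy = np.zeros(n_rows)
    avg_payoff = 0.0
    for _ in range(n_rounds):
        p = weights / weights.sum()                 # row player's mixed strategy
        j = int(np.argmax(A.T @ p))                 # column player's best response to p
        avg_strategy += p / n_rounds
        avg_payoff += float(p @ A[:, j]) / n_rounds
        weights *= (1.0 - eta) ** A[:, j]           # shrink rows that pay the column player well
    return avg_strategy, avg_payoff

A = np.array([[0.0, 1.0],                           # hypothetical payoff matrix in [0, 1]
              [1.0, 0.0]])
p, value = solve_zero_sum(A)
print(np.round(p, 3), round(value, 3))              # near (0.5, 0.5) with a game value near 0.5

Note that the time-averaged strategy is reported, in line with the convergence caveat in the next paragraph.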
Bailey and Piliouras showed that although the time average behavior of multiplicative weights update converges to Nash equilibria in zero-sum games the day-to-day (last iterate) behavior diverges away from it.[10]
In machine learning, Littlestone and Warmuth generalized the winnow algorithm to the weighted majority algorithm.[11]Later, Freund and Schapire generalized it in the form of hedge algorithm.[12]AdaBoost Algorithm formulated by Yoav Freund and Robert Schapire also employed the Multiplicative Weight Update Method.[1]
Based on current knowledge in algorithms, the multiplicative weight update method was first used in Littlestone's winnow algorithm.[1]It is used in machine learning to solve a linear program.
Givenm{\displaystyle m}labeled examples(a1,l1),…,(am,lm){\displaystyle \left(a_{1},l_{1}\right),{\text{…}},\left(a_{m},l_{m}\right)}whereaj∈Rn{\displaystyle a_{j}\in \mathbb {R} ^{n}}are feature vectors, andlj∈{−1,1}{\displaystyle l_{j}\in \left\{-1,1\right\}\quad }are their labels.
The aim is to find non-negative weights such that for all examples, the sign of the weighted combination of the features matches its label. That is, require thatljajx≥0{\displaystyle l_{j}a_{j}x\geq 0}for allj{\displaystyle j}. Without loss of generality, assume the total weight is 1 so that the weights form a distribution. Thus, for notational convenience, redefiningaj{\displaystyle a_{j}}to beljaj{\displaystyle l_{j}a_{j}}, the problem reduces to finding a solution to the following LP:
This is the general form of an LP.
Source:[2]
The hedge algorithm is similar to the weighted majority algorithm. However, their exponential update rules are different.[2]It is generally used to solve the problem of binary allocation, in which we need to allocate different portions of resources to N different options. The loss for every option is available at the end of every iteration. The goal is to reduce the total loss suffered for a particular allocation. The allocation for the following iteration is then revised, based on the total loss suffered in the current iteration using multiplicative update.[13]
Assume the learning rateη>0{\displaystyle \eta >0}and fort∈[T]{\displaystyle t\in [T]},pt{\displaystyle p^{t}}is picked by Hedge. Then for all expertsi{\displaystyle i},
Initialization: Fix anη>0{\displaystyle \eta >0}. For each expert, associate the weightwi1{\displaystyle w_{i}^{1}}≔ 1. For t = 1, 2, ..., T:
This algorithm[12]maintains a set of weightswt{\displaystyle w^{t}}over the training examples. On every iterationt{\displaystyle t}, a distributionpt{\displaystyle p^{t}}is computed by normalizing these weights. This distribution is fed to the weak learner WeakLearn which generates a hypothesisht{\displaystyle h_{t}}that (hopefully) has small error with respect to the distribution. Using the new hypothesisht{\displaystyle h_{t}}, AdaBoost generates the next weight vectorwt+1{\displaystyle w^{t+1}}. The process repeats. After T such iterations, the final hypothesishf{\displaystyle h_{f}}is the output. The hypothesishf{\displaystyle h_{f}}combines the outputs of the T weak hypotheses using a weighted majority vote.[12]
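The exponential update at the heart of Hedge (and of the AdaBoost-style reweighting described above) can be sketched as follows; the loss values are hypothetical, and the update w_i ← w_i·exp(−η·loss_i) assumes losses in [0, 1].

# Hedge update sketch: allocate according to normalized weights, observe losses,
# then multiply each weight by exp(-eta * loss).
import numpy as np

def hedge(losses, eta=0.3):
    """losses: array of shape (T, n) with per-round losses for each of n options."""
    T, n = losses.shape
    weights = np.ones(n)
    total_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()            # allocation over the n options
        total_loss += float(p @ losses[t])     # expected loss suffered this round
        weights *= np.exp(-eta * losses[t])    # exponential multiplicative update
    return total_loss, weights / weights.sum()

rng = np.random.default_rng(3)
losses = rng.uniform(0.0, 1.0, size=(50, 4))   # hypothetical losses for 4 options over 50 rounds
losses[:, 2] *= 0.2                            # option 2 is consistently better
total, final_p = hedge(losses)
print(round(total, 2), np.round(final_p, 3))   # most weight ends up on option 2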
Source:[14]
Given anm×n{\displaystyle m\times n}matrixA{\displaystyle A}andb∈Rn{\displaystyle b\in \mathbb {R} ^{n}}, is there anx{\displaystyle x}such thatAx≥b{\displaystyle Ax\geq b}?
Using the oracle algorithm in solving zero-sum problem, with an error parameterϵ>0{\displaystyle \epsilon >0}, the output would either be a pointx{\displaystyle x}such thatAx≥b−ϵ{\displaystyle Ax\geq b-\epsilon }or a proof thatx{\displaystyle x}does not exist, i.e., there is no solution to this linear system of inequalities.
Given vectorp∈Δn{\displaystyle p\in \Delta _{n}}, solves the following relaxed problem
If there exists a x satisfying (1), then x satisfies (2) for allp∈Δn{\displaystyle p\in \Delta _{n}}. The contrapositive of this statement is also true.
Suppose that whenever the oracle returns a feasible solution for ap{\displaystyle p}, the solutionx{\displaystyle x}it returns has bounded widthmaxi|(Ax)i−bi|≤1{\displaystyle \max _{i}|{(Ax)}_{i}-b_{i}|\leq 1}.
So if there is a solution to (1), then there is an algorithm whose output x satisfies the system (2) up to an additive error of2ϵ{\displaystyle 2\epsilon }. The algorithm makes at mostln(m)ϵ2{\displaystyle {\frac {\ln(m)}{\epsilon ^{2}}}}calls to a width-bounded oracle for the problem (2). The contrapositive stands true as well. The multiplicative weights update is applied within the algorithm in this case.
|
https://en.wikipedia.org/wiki/Multiplicative_weight_update_method#AdaBoost_algorithm
|
Machine learning(ML) is afield of studyinartificial intelligenceconcerned with the development and study ofstatistical algorithmsthat can learn fromdataandgeneraliseto unseen data, and thus performtaskswithout explicitinstructions.[1]Within a subdiscipline in machine learning, advances in the field ofdeep learninghave allowedneural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance.[2]
ML finds application in many fields, includingnatural language processing,computer vision,speech recognition,email filtering,agriculture, andmedicine.[3][4]The application of ML to business problems is known aspredictive analytics.
Statisticsandmathematical optimisation(mathematical programming) methods comprise the foundations of machine learning.Data miningis a related field of study, focusing onexploratory data analysis(EDA) viaunsupervised learning.[6][7]
From a theoretical viewpoint,probably approximately correct learningprovides a framework for describing machine learning.
The termmachine learningwas coined in 1959 byArthur Samuel, anIBMemployee and pioneer in the field ofcomputer gamingandartificial intelligence.[8][9]The synonymself-teaching computerswas also used in this time period.[10][11]
Although the earliest machine learning model was introduced in the 1950s whenArthur Samuelinvented aprogramthat calculated the winning chance in checkers for each side, the history of machine learning has its roots in decades of human desire and effort to study human cognitive processes.[12]In 1949,CanadianpsychologistDonald Hebbpublished the bookThe Organization of Behavior, in which he introduced atheoretical neural structureformed by certain interactions amongnerve cells.[13]Hebb's model ofneuronsinteracting with one another set a groundwork for how AIs and machine learning algorithms work under nodes, orartificial neuronsused by computers to communicate data.[12]Other researchers who have studied humancognitive systemscontributed to the modern machine learning technologies as well, including logicianWalter PittsandWarren McCulloch, who proposed the early mathematical models of neural networks to come up withalgorithmsthat mirror human thought processes.[12]
By the early 1960s, an experimental "learning machine" withpunched tapememory, called Cybertron, had been developed byRaytheon Companyto analysesonarsignals,electrocardiograms, and speech patterns using rudimentaryreinforcement learning. It was repetitively "trained" by a human operator/teacher to recognise patterns and equipped with a "goof" button to cause it to reevaluate incorrect decisions.[14]A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification.[15]Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973.[16]In 1981 a report was given on using teaching strategies so that anartificial neural networklearns to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.[17]
Tom M. Mitchellprovided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experienceEwith respect to some class of tasksTand performance measurePif its performance at tasks inT, as measured byP, improves with experienceE."[18]This definition of the tasks in which machine learning is concerned offers a fundamentallyoperational definitionrather than defining the field in cognitive terms. This followsAlan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".[19]
Modern-day machine learning has two objectives. One is to classify data based on models which have been developed; the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. A machine learning algorithm for stock trading may inform the trader of future potential predictions.[20]
As a scientific endeavour, machine learning grew out of the quest forartificial intelligence(AI). In the early days of AI as anacademic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostlyperceptronsandother modelsthat were later found to be reinventions of thegeneralised linear modelsof statistics.[22]Probabilistic reasoningwas also employed, especially inautomated medical diagnosis.[23]: 488
However, an increasing emphasis on thelogical, knowledge-based approachcaused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.[23]: 488By 1980,expert systemshad come to dominate AI, and statistics was out of favour.[24]Work on symbolic/knowledge-based learning did continue within AI, leading toinductive logic programming(ILP), but the more statistical line of research was now outside the field of AI proper, inpattern recognitionandinformation retrieval.[23]: 708–710, 755Neural networks research had been abandoned by AI andcomputer sciencearound the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines includingJohn Hopfield,David Rumelhart, andGeoffrey Hinton. Their main success came in the mid-1980s with the reinvention ofbackpropagation.[23]: 25
Machine learning (ML), reorganised and recognised as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from thesymbolic approachesit had inherited from AI, and toward methods and models borrowed from statistics,fuzzy logic, andprobability theory.[24]
There is a close connection between machine learning and compression. A system that predicts theposterior probabilitiesof a sequence given its entire history can be used for optimal data compression (by usingarithmetic codingon the output distribution). Conversely, an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence".[25][26][27]
An alternative view is that compression algorithms implicitly map strings into implicit feature space vectors, and that compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) one can define an associated vector space ℵ, such that C(.) maps an input string x to a vector in that space whose norm is ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is impractical; instead, the analysis typically examines a few representative lossless compression methods, such as LZW, LZ77, and PPM.[28]
According toAIXItheory, a connection more directly explained inHutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since you can not unzip it without both, but there may be an even smaller combined form.
Examples of AI-powered audio/video compression software includeNVIDIA Maxine, AIVC.[29]Examples of software that can perform AI-powered image compression includeOpenCV,TensorFlow,MATLAB's Image Processing Toolbox (IPT) and High-Fidelity Generative Image Compression.[30]
Inunsupervised machine learning,k-means clusteringcan be utilized to compress data by grouping similar data points into clusters. This technique simplifies handling extensive datasets that lack predefined labels and finds widespread use in fields such asimage compression.[31]
Data compression aims to reduce the size of data files, enhancing storage efficiency and speeding up data transmission. K-means clustering, an unsupervised machine learning algorithm, is employed to partition a dataset into a specified number of clusters, k, each represented by thecentroidof its points. This process condenses extensive datasets into a more compact set of representative points. Particularly beneficial inimageandsignal processing, k-means clustering aids in data reduction by replacing groups of data points with their centroids, thereby preserving the core information of the original data while significantly decreasing the required storage space.[32]
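As an illustration of this idea, the following sketch (an assumed typical usage, not taken from the cited sources) applies scikit-learn's KMeans to synthetic RGB pixel data, replacing every pixel by one of k centroid colours so the image can be stored as a small palette plus per-pixel labels.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pixels = rng.random((10_000, 3))          # stand-in for an image's RGB pixels in [0, 1]

k = 16                                    # size of the compressed colour palette
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

palette = km.cluster_centers_             # k representative colours
labels = km.labels_                       # one small integer index per pixel
compressed = palette[labels]              # reconstruction from the centroids

# Storage drops from 10,000 x 3 floats to k x 3 floats plus 10,000 small integers.
print(compressed.shape, palette.shape)
```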
Machine learning anddata miningoften employ the same methods and overlap significantly, but while machine learning focuses on prediction, based onknownproperties learned from the training data, data mining focuses on thediscoveryof (previously)unknownproperties in the data (this is the analysis step ofknowledge discoveryin databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals,ECML PKDDbeing a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability toreproduce knownknowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previouslyunknownknowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.
Machine learning also has intimate ties tooptimisation: Many learning problems are formulated as minimisation of someloss functionon a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign alabelto instances, and models are trained to correctly predict the preassigned labels of a set of examples).[35]
Characterizing the generalisation of various learning algorithms is an active topic of current research, especially fordeep learningalgorithms.
Machine learning andstatisticsare closely related fields in terms of methods, but distinct in their principal goal: statistics draws populationinferencesfrom asample, while machine learning finds generalisable predictive patterns.[36]According toMichael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.[37]He also suggested the termdata scienceas a placeholder to call the overall field.[37]
Conventional statistical analyses require the a priori selection of a model most suitable for the study data set. In addition, only significant or theoretically relevant variables based on previous experience are included for analysis. In contrast, machine learning is not built on a pre-structured model; rather, the data shape the model by detecting underlying patterns. The more variables (input) used to train the model, the more accurate the ultimate model will be.[38]
Leo Breimandistinguished two statistical modelling paradigms: data model and algorithmic model,[39]wherein "algorithmic model" means more or less the machine learning algorithms likeRandom Forest.
Some statisticians have adopted methods from machine learning, leading to a combined field that they callstatistical learning.[40]
Analytical and computational techniques derived from deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyse the weight space ofdeep neural networks.[41]Statistical physics is thus finding applications in the area ofmedical diagnostics.[42]
A core objective of a learner is to generalise from its experience.[5][43]Generalisation in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.
The computational analysis of machine learning algorithms and their performance is a branch oftheoretical computer scienceknown ascomputational learning theoryvia theprobably approximately correct learningmodel. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. Thebias–variance decompositionis one way to quantify generalisationerror.
For the best performance in the context of generalisation, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has under fitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject tooverfittingand generalisation will be poorer.[44]
In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done inpolynomial time. There are two kinds oftime complexityresults: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.
Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system:
Although each algorithm has advantages and limitations, no single algorithm works for all problems.[45][46][47]
Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs.[48]The data, known astraining data, consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by anarrayor vector, sometimes called afeature vector, and the training data is represented by amatrix. Throughiterative optimisationof anobjective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs.[49]An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.[18]
Types of supervised-learning algorithms includeactive learning,classificationandregression.[50]Classification algorithms are used when the outputs are restricted to a limited set of values, while regression algorithms are used when the outputs can take any numerical value within a range. For example, in a classification algorithm that filters emails, the input is an incoming email, and the output is the folder in which to file the email. In contrast, regression is used for tasks such as predicting a person's height based on factors like age and genetics or forecasting future temperatures based on historical data.[51]
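The classification/regression distinction can be seen in a short scikit-learn sketch on synthetic data; the dataset generators and model choices below are illustrative assumptions, not part of the text above.

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# Classification: outputs restricted to a finite set of labels.
Xc, yc = make_classification(n_samples=200, n_features=5, random_state=0)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xc_tr, yc_tr)
print("classification accuracy:", clf.score(Xc_te, yc_te))

# Regression: outputs take continuous numerical values.
Xr, yr = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
reg = LinearRegression().fit(Xr_tr, yr_tr)
print("regression R^2:", reg.score(Xr_te, yr_te))
```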
Similarity learningis an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications inranking,recommendation systems, visual identity tracking, face verification, and speaker verification.
Unsupervised learning algorithms find structures in data that has not been labelled, classified or categorised. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering,dimensionality reduction,[7]anddensity estimation.[52]
Cluster analysis is the assignment of a set of observations into subsets (calledclusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by somesimilarity metricand evaluated, for example, byinternal compactness, or the similarity between members of the same cluster, andseparation, the difference between clusters. Other methods are based onestimated densityandgraph connectivity.
A special type of unsupervised learning called self-supervised learning involves training a model by generating the supervisory signal from the data itself.[53][54]
Semi-supervised learning falls betweenunsupervised learning(without any labelled training data) andsupervised learning(with completely labelled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabelled data, when used in conjunction with a small amount of labelled data, can produce a considerable improvement in learning accuracy.
Inweakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.[55]
Reinforcement learning is an area of machine learning concerned with howsoftware agentsought to takeactionsin an environment so as to maximise some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such asgame theory,control theory,operations research,information theory,simulation-based optimisation,multi-agent systems,swarm intelligence,statisticsandgenetic algorithms. In reinforcement learning, the environment is typically represented as aMarkov decision process(MDP). Many reinforcement learning algorithms usedynamic programmingtechniques.[56]Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent.
Dimensionality reductionis a process of reducing the number of random variables under consideration by obtaining a set of principal variables.[57]In other words, it is a process of reducing the dimension of thefeatureset, also called the "number of features". Most of the dimensionality reduction techniques can be considered as either feature elimination orextraction. One of the popular methods of dimensionality reduction isprincipal component analysis(PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D).
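A brief sketch of PCA reducing synthetic 3-D data to 2-D, as described above; the data-generating process here is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic 3-D data that mostly varies along a 2-D plane, plus small noise.
latent = rng.normal(size=(500, 2))
mixing = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
X = latent @ mixing.T + 0.05 * rng.normal(size=(500, 3))

pca = PCA(n_components=2)
X2 = pca.fit_transform(X)               # project the 3-D points onto 2 principal components
print(X2.shape)                         # (500, 2)
print(pca.explained_variance_ratio_)    # most variance captured by the two components
```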
Themanifold hypothesisproposes that high-dimensional data sets lie along low-dimensionalmanifolds, and many dimensionality reduction techniques make this assumption, leading to the area ofmanifold learningandmanifold regularisation.
Other approaches have been developed which do not fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system. Examples include topic modelling and meta-learning.[58]
Self-learning, as a machine learning paradigm, was introduced in 1982 along with a neural network capable of self-learning, named crossbar adaptive array (CAA).[59][60]It gives a solution to the problem of learning without any external reward, by introducing emotion as an internal reward. Emotion is used as state evaluation of a self-learning agent. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion.[61]The self-learning algorithm updates a memory matrix W =||w(a,s)|| such that in each iteration it executes the following machine learning routine: in situation s perform action a; receive the consequence situation s'; compute the emotion of being in the consequence situation v(s'); and update the crossbar memory w'(a,s) = w(a,s) + v(s').
It is a system with only one input, situation, and only one output, action (or behaviour) a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments, one is the behavioural environment where it behaves, and the other is the genetic environment, wherefrom it initially and only once receives initial emotions about situations to be encountered in the behavioural environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behaviour, in an environment that contains both desirable and undesirable situations.[62]
Several learning algorithms aim at discovering better representations of the inputs provided during training.[63]Classic examples includeprincipal component analysisand cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manualfeature engineering, and allows a machine to both learn the features and use them to perform a specific task.
Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labelled input data. Examples includeartificial neural networks,multilayer perceptrons, and superviseddictionary learning. In unsupervised feature learning, features are learned with unlabelled input data. Examples include dictionary learning,independent component analysis,autoencoders,matrix factorisation[64]and various forms ofclustering.[65][66][67]
Manifold learningalgorithms attempt to do so under the constraint that the learned representation is low-dimensional.Sparse codingalgorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros.Multilinear subspace learningalgorithms aim to learn low-dimensional representations directly fromtensorrepresentations for multidimensional data, without reshaping them into higher-dimensional vectors.[68]Deep learningalgorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.[69]
Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination ofbasis functionsand assumed to be asparse matrix. The method isstrongly NP-hardand difficult to solve approximately.[70]A popularheuristicmethod for sparse dictionary learning is thek-SVDalgorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen training example belongs. For a dictionary where each class has already been built, a new training example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied inimage de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.[71]
Indata mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.[72]Typically, the anomalous items represent an issue such asbank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to asoutliers, novelties, noise, deviations and exceptions.[73]
In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of inactivity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns.[74]
Three broad categories of anomaly detection techniques exist.[75]Unsupervised anomaly detection techniques detect anomalies in an unlabelled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit the least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labelled as "normal" and "abnormal" and involves training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behaviour from a given normal training data set and then test the likelihood of a test instance to be generated by the model.
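The unsupervised setting can be sketched as follows; using scikit-learn's IsolationForest and the synthetic data below are illustrative assumptions, not something prescribed by the text.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(300, 2))     # bulk of the data
outliers = rng.uniform(low=-6, high=6, size=(10, 2))       # rare, scattered points
X = np.vstack([normal, outliers])

# Unsupervised anomaly detection: no labels, assume most instances are normal.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
flags = detector.predict(X)          # +1 for inliers, -1 for flagged anomalies
print("flagged as anomalous:", int((flags == -1).sum()))
```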
Robot learningis inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning,[76][77]and finallymeta-learning(e.g. MAML).
Association rule learning is arule-based machine learningmethod for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness".[78]
Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilisation of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.[79]Rule-based machine learning approaches includelearning classifier systems, association rule learning, andartificial immune systems.
Based on the concept of strong rules,Rakesh Agrawal,Tomasz Imielińskiand Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded bypoint-of-sale(POS) systems in supermarkets.[80]For example, the rule{onions,potatoes}⇒{burger}{\displaystyle \{\mathrm {onions,potatoes} \}\Rightarrow \{\mathrm {burger} \}}found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotionalpricingorproduct placements. In addition tomarket basket analysis, association rules are employed today in application areas includingWeb usage mining,intrusion detection,continuous production, andbioinformatics. In contrast withsequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
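A minimal, self-contained sketch of how support and confidence for the {onions, potatoes} ⇒ {burger} rule could be computed; the toy transactions are invented for illustration.

```python
# Toy market-basket data: each transaction is the set of items purchased together.
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
    {"potatoes", "burger"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """How often the consequent appears among transactions containing the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

lhs, rhs = {"onions", "potatoes"}, {"burger"}
print("support:", support(lhs | rhs))        # 2/5 = 0.4
print("confidence:", confidence(lhs, rhs))   # 2/3 ≈ 0.67
```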
Learning classifier systems(LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically agenetic algorithm, with a learning component, performing eithersupervised learning,reinforcement learning, orunsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in apiecewisemanner in order to make predictions.[81]
Inductive logic programming(ILP) is an approach to rule learning usinglogic programmingas a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program thatentailsall positive and no negative examples.Inductive programmingis a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such asfunctional programs.
Inductive logic programming is particularly useful inbioinformaticsandnatural language processing.Gordon PlotkinandEhud Shapirolaid the initial theoretical foundation for inductive machine learning in a logical setting.[82][83][84]Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples.[85]The terminductivehere refers tophilosophicalinduction, suggesting a theory to explain observed facts, rather thanmathematical induction, proving a property for all members of a well-ordered set.
Amachine learning modelis a type ofmathematical modelthat, once "trained" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimise errors in its predictions.[86]By extension, the term "model" can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned.[87]
Various types of models have been used and researched for machine learning systems; picking the best model for a task is called model selection.
Artificial neural networks (ANNs), orconnectionistsystems, are computing systems vaguely inspired by thebiological neural networksthat constitute animalbrains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules.
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model theneuronsin a biological brain. Each connection, like thesynapsesin a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is areal number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have aweightthat adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
The original goal of the ANN approach was to solve problems in the same way that ahuman brainwould. However, over time, attention moved to performing specific tasks, leading to deviations frombiology. Artificial neural networks have been used on a variety of tasks, includingcomputer vision,speech recognition,machine translation,social networkfiltering,playing board and video gamesandmedical diagnosis.
Deep learningconsists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[88]
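A toy two-layer network trained with backpropagation illustrates the signal passing and weight adjustment described above; the architecture, data, and learning rate are illustrative assumptions, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                              # 64 examples, 3 input features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)      # toy binary target

W1, b1 = rng.normal(size=(3, 8)) * 0.5, np.zeros(8)       # hidden layer (8 units)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)       # output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                 # hidden activations (non-linear sums of inputs)
    out = sigmoid(h @ W2 + b2)               # network output in (0, 1)
    # Backpropagate the squared-error gradient through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
```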
Decision tree learning uses adecision treeas apredictive modelto go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures,leavesrepresent class labels, and branches representconjunctionsof features that lead to those class labels. Decision trees where the target variable can take continuous values (typicallyreal numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions anddecision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making.
Random forest regression (RFR) falls under the umbrella of decision-tree-based models. RFR is an ensemble learning method that builds multiple decision trees and averages their predictions to improve accuracy and avoid overfitting. To build the decision trees, RFR uses bootstrapped sampling: each decision tree is trained on a random sample drawn from the training set. This randomness helps the model reduce biased predictions and improve accuracy. RFR generates independent decision trees, and it can handle single-output as well as multi-output regression tasks, which makes RFR suitable for a variety of applications.[89][90]
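A hedged sketch of the approach using scikit-learn's RandomForestRegressor on synthetic data; the target function and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=400)  # noisy non-linear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each tree is fit on a bootstrap sample of the training data; predictions
# are averaged across trees to reduce variance and limit overfitting.
rfr = RandomForestRegressor(n_estimators=200, bootstrap=True, random_state=0)
rfr.fit(X_tr, y_tr)
print("test R^2:", rfr.score(X_te, y_te))
```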
Support-vector machines (SVMs), also known as support-vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.[91]An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form islinear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such asordinary least squares. The latter is often extended byregularisationmethods to mitigate overfitting and bias, as inridge regression. When dealing with non-linear problems, go-to models includepolynomial regression(for example, used for trendline fitting in Microsoft Excel[92]),logistic regression(often used instatistical classification) or evenkernel regression, which introduces non-linearity by taking advantage of thekernel trickto implicitly map input variables to higher-dimensional space.
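The relationship between ordinary least squares and ridge regularisation can be shown with a short NumPy sketch; the synthetic data and regularisation strength are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=n)

# Ordinary least squares: solve the normal equations (X^T X) w = X^T y.
w_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge regression adds an L2 penalty lambda * ||w||^2, which shifts the
# normal equations by lambda * I and shrinks the coefficients.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

print("OLS:  ", np.round(w_ols, 3))
print("Ridge:", np.round(w_ridge, 3))
```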
Multivariate linear regressionextends the concept of linear regression to handle multiple dependent variables simultaneously. This approach estimates the relationships between a set of input variables and several output variables by fitting amultidimensionallinear model. It is particularly useful in scenarios where outputs are interdependent or share underlying patterns, such as predicting multiple economic indicators or reconstructing images,[93]which are inherently multi-dimensional.
A Bayesian network, belief network, or directed acyclic graphical model is a probabilisticgraphical modelthat represents a set ofrandom variablesand theirconditional independencewith adirected acyclic graph(DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that performinferenceand learning. Bayesian networks that model sequences of variables, likespeech signalsorprotein sequences, are calleddynamic Bayesian networks. Generalisations of Bayesian networks that can represent and solve decision problems under uncertainty are calledinfluence diagrams.
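A minimal numerical sketch of inference in a two-node disease–symptom network; the probabilities below are invented for illustration.

```python
# Tiny Disease -> Symptom network with assumed conditional probabilities.
p_disease = 0.01
p_symptom_given_disease = 0.90
p_symptom_given_healthy = 0.05

# Inference by Bayes' rule: probability of the disease given the observed symptom.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(round(p_disease_given_symptom, 3))   # ≈ 0.154
```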
A Gaussian process is astochastic processin which every finite collection of the random variables in the process has amultivariate normal distribution, and it relies on a pre-definedcovariance function, or kernel, that models how pairs of points relate to each other depending on their locations.
Given a set of observed points, or input–output examples, the distribution of the (unobserved) output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point.
Gaussian processes are popular surrogate models inBayesian optimisationused to dohyperparameter optimisation.
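A compact NumPy sketch of the posterior-mean computation described above, using a squared-exponential kernel; the data and noise level are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
X_obs = np.linspace(-3, 3, 10)                      # observed input points
y_obs = np.sin(X_obs) + 0.1 * rng.normal(size=10)   # noisy observed outputs
X_new = np.linspace(-3, 3, 50)                      # points to predict

noise = 0.1 ** 2
K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))   # covariances among observations
K_s = rbf_kernel(X_new, X_obs)                               # covariances to the new points

# Posterior mean at the new points, derived from the covariance structure.
alpha = np.linalg.solve(K, y_obs)
mean_new = K_s @ alpha
print(mean_new[:5])
```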
A genetic algorithm (GA) is asearch algorithmandheuristictechnique that mimics the process ofnatural selection, using methods such asmutationandcrossoverto generate newgenotypesin the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s.[95][96]Conversely, machine learning techniques have been used to improve the performance of genetic andevolutionary algorithms.[97]
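A toy genetic algorithm with mutation, crossover, and truncation selection on a bit-string objective; the objective and all parameters are illustrative assumptions.

```python
import random

random.seed(0)
TARGET = [1] * 20                       # toy objective: maximise the number of 1-bits

def fitness(genotype):
    return sum(g == t for g, t in zip(genotype, TARGET))

def mutate(genotype, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genotype]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    # Selection: keep the fitter half, then refill via crossover and mutation.
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(15)]
    population = parents + children

print("best fitness:", max(fitness(g) for g in population))
```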
The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined (e.g., Dempster's rule of combination), just as a pmf-based Bayesian approach would combine probabilities.[98]However, there are many caveats to these belief functions when compared to Bayesian approaches in order to incorporate ignorance and uncertainty quantification. These belief function approaches that are implemented within the machine learning domain typically leverage a fusion approach of various ensemble methods to better handle the learner's decision boundary, low samples, and ambiguous class issues that standard machine learning approaches tend to have difficulty resolving.[4][9]However, the computational complexity of these algorithms is dependent on the number of propositions (classes), and can lead to a much higher computation time when compared to other machine learning approaches.
Rule-based machine learning (RBML) is a branch of machine learning that automatically discovers and learns 'rules' from data. It provides interpretable models, making it useful for decision-making in fields like healthcare, fraud detection, and cybersecurity. Key RBML techniques include learning classifier systems,[99]association rule learning,[100]artificial immune systems,[101]and other similar models. These methods extract patterns from data and evolve rules over time.
Typically, machine learning models require a high quantity of reliable data to perform accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representativesampleof data. Data from the training set can be as varied as acorpus of text, a collection of images,sensordata, and data collected from individual users of a service.Overfittingis something to watch out for when training a machine learning model. Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or objectives.Algorithmic biasis a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and notably, becoming integrated within machine learning engineering teams.
Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralises the training process, allowing users' privacy to be maintained by not needing to send their data to a centralised server. This also increases efficiency by decentralising the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.[102]
There are many applications for machine learning, including:
In 2006, the media-services providerNetflixheld the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers fromAT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built anensemble modelto win the Grand Prize in 2009 for $1 million.[105]Shortly after the prize was awarded, Netflix realised that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly.[106]In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis.[107]In 2012, co-founder ofSun Microsystems,Vinod Khosla, predicted that 80% of medical doctors jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[108]In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognised influences among artists.[109]In 2019Springer Naturepublished the first research book created using machine learning.[110]In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19.[111]Machine learning was recently applied to predict the pro-environmental behaviour of travellers.[112]Recently, machine learning technology was also applied to optimise smartphone's performance and thermal behaviour based on the user's interaction with the phone.[113][114][115]When applied correctly, machine learning algorithms (MLAs) can utilise a wide range of company characteristics to predict stock returns withoutoverfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques likeOLS.[116]
Recent advancements in machine learning have extended into the field of quantum chemistry, where novel algorithms now enable the prediction of solvent effects on chemical reactions, thereby offering new tools for chemists to tailor experimental conditions for optimal outcomes.[117]
Machine learning is becoming a useful tool to investigate and predict evacuation decision making in large-scale and small-scale disasters. Different solutions have been tested to predict if and when householders decide to evacuate during wildfires and hurricanes.[118][119][120]Other applications have focused on pre-evacuation decisions in building fires.[121][122]
Machine learning is also emerging as a promising tool in geotechnical engineering, where it is used to support tasks such as ground classification, hazard prediction, and site characterization. Recent research emphasizes a move toward data-centric methods in this field, where machine learning is not a replacement for engineering judgment, but a way to enhance it using site-specific data and patterns.[123]
Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results.[124][125][126]Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems.[127]
The "black box theory" poses another yet significant challenge. Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted out of the data.[128]The House of Lords Select Committee, which claimed that such an "intelligence system" that could have a "substantial impact on an individual's life" would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes.[128]
In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision.[129]Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested.[130][131]Microsoft's Bing Chat chatbot has been reported to produce hostile and offensive responses to its users.[132]
Machine learning has been used as a strategy to update the evidence related to systematic reviews and to address the increased reviewer burden related to the growth of biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves.[133]
Explainable AI (XAI), or Interpretable AI, or Explainable Machine Learning (XML), is artificial intelligence (AI) in which humans can understand the decisions or predictions made by the AI.[134]It contrasts with the "black box" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision.[135]By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation.
Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalising the theory in accordance with how complex the theory is.[136]
Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[137]A real-world example is that, unlike humans, current image classifiers often do not primarily make judgements from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies.[138][139]
Adversarial vulnerabilities can also arise in nonlinear systems, or from non-pattern perturbations. For some systems, it is possible to change the output by changing only a single adversarially chosen pixel.[140]Machine learning models are often vulnerable to manipulation or evasion via adversarial machine learning.[141]
Researchers have demonstrated how backdoors can be placed undetectably into machine learning classifiers (e.g., models that classify posts into categories such as "spam" and clearly visible "not spam") that are often developed or trained by third parties. Parties can change the classification of any input, including in cases for which a type of data/software transparency is provided, possibly including white-box access.[142][143][144]
Classification of machine learning models can be validated by accuracy estimation techniques like theholdoutmethod, which splits the data in a training and test set (conventionally 2/3 training set and 1/3 test set designation) and evaluates the performance of the training model on the test set. In comparison, the K-fold-cross-validationmethod randomly partitions the data into K subsets and then K experiments are performed each respectively considering 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods,bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[145]
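Both validation schemes can be sketched in a few lines with scikit-learn; the classifier and dataset below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Holdout: a single split, here roughly 2/3 training and 1/3 test.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
holdout_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# K-fold cross-validation: K experiments, each holding out one of K subsets.
cv_acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)

print("holdout accuracy:", round(holdout_acc, 3))
print("5-fold accuracies:", np.round(cv_acc, 3))
```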
In addition to overall accuracy, investigators frequently reportsensitivity and specificitymeaning true positive rate (TPR) and true negative rate (TNR) respectively. Similarly, investigators sometimes report thefalse positive rate(FPR) as well as thefalse negative rate(FNR). However, these rates are ratios that fail to reveal their numerators and denominators.Receiver operating characteristic(ROC) along with the accompanying Area Under the ROC Curve (AUC) offer additional tools for classification model assessment. Higher AUC is associated with a better performing model.[146]
Theethicsofartificial intelligencecovers a broad range of topics within AI that are considered to have particular ethical stakes.[147]This includesalgorithmic biases,fairness,[148]automated decision-making,[149]accountability,privacy, andregulation. It also covers various emerging or potential future challenges such asmachine ethics(how to make machines that behave ethically),lethal autonomous weapon systems,arms racedynamics,AI safetyandalignment,technological unemployment, AI-enabledmisinformation, how to treat certain AI systems if they have amoral status(AI welfare and rights),artificial superintelligenceandexistential risks.[147]
Different machine learning approaches can suffer from different data biases. A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on human-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society.[150]
Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitising cultural prejudices.[151]For example, in 1988, the UK'sCommission for Racial Equalityfound thatSt. George's Medical Schoolhad been using a computer program trained from data of previous admissions staff and that this program had denied nearly 60 candidates who were found to either be women or have non-European sounding names.[150]Using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants.[152][153]Another example includes predictive policing companyGeolitica's predictive algorithm that resulted in "disproportionately high levels of over-policing in low-income and minority communities" after being trained with historical crime data.[154]
While responsiblecollection of dataand documentation of algorithmic rules used by a system is considered a critical part of machine learning, some researchers blame lack of participation and representation of minority population in the field of AI for machine learning's vulnerability to biases.[155]In fact, according to research carried out by the Computing Research Association (CRA) in 2021, "female faculty merely make up 16.1%" of all faculty members who focus on AI among several universities around the world.[156]Furthermore, among the group of "new U.S. resident AI PhD graduates," 45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as African American, which further demonstrates a lack of diversity in the field of AI.[156]
Language models learned from data have been shown to contain human-like biases.[157][158]Because human languages contain biases, machines trained on languagecorporawill necessarily also learn these biases.[159][160]In 2016, Microsoft testedTay, achatbotthat learned from Twitter, and it quickly picked up racist and sexist language.[161]
In an experiment carried out byProPublica, aninvestigative journalismorganisation, a machine learning algorithm's insight into the recidivism rates among prisoners falsely flagged "black defendants high risk twice as often as white defendants".[154]In 2015, Google Photos once tagged a couple of black people as gorillas, which caused controversy. The gorilla label was subsequently removed, and in 2023, it still cannot recognise gorillas.[162]Similar issues with recognising non-white people have been found in many other systems.[163]
Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains.[164]Concern forfairnessin machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, includingFei-Fei Li, who said that "[t]here's nothing artificial about AI. It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."[165]
There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States where there is a long-standing ethical dilemma of improving health care, but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is potential for machine learning in health care to provide professionals an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated.[166]
Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for trainingdeep neural networks(a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units.[167]By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[168]OpenAIestimated the hardware compute used in the largest deep learning projects fromAlexNet(2012) toAlphaZero(2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[169][170]
Tensor Processing Units (TPUs)are specialised hardware accelerators developed byGooglespecifically for machine learning workloads. Unlike general-purposeGPUsandFPGAs, TPUs are optimised for tensor computations, making them particularly efficient for deep learning tasks such as training and inference. They are widely used in Google Cloud AI services and large-scale machine learning models like Google's DeepMind AlphaFold and large language models. TPUs leverage matrix multiplication units and high-bandwidth memory to accelerate computations while maintaining energy efficiency.[171]Since their introduction in 2016, TPUs have become a key component of AI infrastructure, especially in cloud-based environments.
Neuromorphic computingrefers to a class of computing systems designed to emulate the structure and functionality of biological neural networks. These systems may be implemented through software-based simulations on conventional hardware or through specialised hardware architectures.[172]
Aphysical neural networkis a specific type of neuromorphic hardware that relies on electrically adjustable materials, such as memristors, to emulate the function ofneural synapses. The term "physical neural network" highlights the use of physical hardware for computation, as opposed to software-based implementations. It broadly refers to artificial neural networks that use materials with adjustable resistance to replicate neural synapses.[173][174]
Embedded machine learning is a sub-field of machine learning where models are deployed onembedded systemswith limited computing resources, such aswearable computers,edge devicesandmicrocontrollers.[175][176][177][178]Running models directly on these devices eliminates the need to transfer and store data on cloud servers for further processing, thereby reducing the risk of data breaches, privacy leaks and theft of intellectual property, personal data and business secrets. Embedded machine learning can be achieved through various techniques, such ashardware acceleration,[179][180]approximate computing,[181]and model optimisation.[182][183]Common optimisation techniques includepruning,quantisation,knowledge distillation, low-rank factorisation, network architecture search, and parameter sharing.
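One of the optimisation techniques mentioned, quantisation, can be sketched as simple post-training 8-bit weight quantisation in NumPy; this is a schematic illustration of the idea, not any particular framework's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.2, size=1000).astype(np.float32)   # trained float32 weights

# Symmetric linear quantisation: map floats to int8 with a single scale factor.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# On-device inference would store q (1 byte per weight) and dequantise on the fly.
dequantised = q.astype(np.float32) * scale
print("max abs error:", float(np.abs(weights - dequantised).max()))
print("memory: %d B -> %d B" % (weights.nbytes, q.nbytes))
```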
Software suitescontaining a variety of machine learning algorithms include the following:
|
https://en.wikipedia.org/wiki/Machine_Learning
|
Incomputer science,binary space partitioning(BSP) is a method forspace partitioningwhichrecursivelysubdivides aEuclidean spaceinto twoconvex setsby usinghyperplanesas partitions. This process of subdividing gives rise to a representation of objects within the space in the form of atree data structureknown as aBSP tree.
Binary space partitioning was developed in the context of3D computer graphicsin 1969.[1][2]The structure of a BSP tree is useful inrenderingbecause it can efficiently give spatial information about the objects in a scene, such as objects being ordered from front-to-back with respect to a viewer at a given location. Other applications of BSP include: performinggeometricaloperations withshapes(constructive solid geometry) inCAD,[3]collision detectioninroboticsand 3D video games,ray tracing, virtual landscape simulation,[4]and other applications that involve the handling of complex spatial scenes.
Binary space partitioning is a generic process ofrecursivelydividing a scene into two until the partitioning satisfies one or more requirements. It can be seen as a generalization of other spatial tree structures such ask-d treesandquadtrees, one where hyperplanes that partition the space may have any orientation, rather than being aligned with the coordinate axes as they are ink-d trees or quadtrees. When used in computer graphics to render scenes composed of planarpolygons, the partitioning planes are frequently chosen to coincide with the planes defined by polygons in the scene.
The specific choice of partitioning plane and criterion for terminating the partitioning process varies depending on the purpose of the BSP tree. For example, in computer graphics rendering, the scene is divided until each node of the BSP tree contains only polygons that can be rendered in arbitrary order. Whenback-face cullingis used, each node, therefore, contains a convex set of polygons, whereas when rendering double-sided polygons, each node of the BSP tree contains only polygons in a single plane. In collision detection or ray tracing, a scene may be divided up intoprimitiveson which collision or ray intersection tests are straightforward.
Binary space partitioning arose from computer graphics needing to rapidly draw three-dimensional scenes composed of polygons. A simple way to draw such scenes is thepainter's algorithm, which produces polygons in order of distance from the viewer, back to front, painting over the background and previous polygons with each closer object. This approach has two disadvantages: the time required to sort polygons in back-to-front order, and the possibility of errors in overlapping polygons. Fuchs and co-authors[2]showed that constructing a BSP tree solved both of these problems by providing a rapid method of sorting polygons with respect to a given viewpoint (linear in the number of polygons in the scene) and by subdividing overlapping polygons to avoid errors that can occur with the painter's algorithm. A disadvantage of binary space partitioning is that generating a BSP tree can be time-consuming. Typically, it is therefore performed once on static geometry, as a pre-calculation step, prior to rendering or other real-time operations on a scene. The expense of constructing a BSP tree makes it difficult and inefficient to directly implement moving objects into a tree.
The canonical use of a BSP tree is for rendering polygons (that are double-sided, that is, without back-face culling) with the painter's algorithm. Each polygon is designated with a front side and a back side; the choice can be made arbitrarily and affects only the structure of the tree, not the required result.[2] Such a tree is constructed from an unsorted list of all the polygons in a scene. The recursive algorithm chooses one polygon from the list as the partitioning plane, sorts the remaining polygons into a front list and a back list (splitting any polygon that spans the plane into two pieces), and then recurses on each list to build the front and back subtrees.[2]
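A minimal Python sketch of this construction, using 2D line segments for simplicity (the choice of the first segment as the partitioning line, the epsilon tolerance, and the helper names are conveniences of this illustration, not part of the cited algorithm):

```python
from dataclasses import dataclass

EPS = 1e-9

@dataclass
class Node:
    partition: tuple              # segment ((x1, y1), (x2, y2)) defining the splitting line
    coplanar: list                # segments lying on the partition line
    front: "Node | None" = None
    back: "Node | None" = None

def side(seg, p):
    """Signed test: > 0 if point p lies on the front (left) side of seg's line."""
    (ax, ay), (bx, by) = seg
    return (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)

def split(seg, plane):
    """Split seg at its intersection with the infinite line through plane."""
    a, b = seg
    da, db = side(plane, a), side(plane, b)
    t = da / (da - db)                          # parameter of the intersection point
    x = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    return (a, x), (x, b)                       # piece on a's side, piece on b's side

def build_bsp(segments):
    if not segments:
        return None
    plane, rest = segments[0], segments[1:]     # step 1: pick a partitioning segment
    node = Node(partition=plane, coplanar=[plane])
    front, back = [], []
    for s in rest:
        da, db = side(plane, s[0]), side(plane, s[1])
        if abs(da) < EPS and abs(db) < EPS:
            node.coplanar.append(s)             # lies on the partition line
        elif da >= -EPS and db >= -EPS:
            front.append(s)
        elif da <= EPS and db <= EPS:
            back.append(s)
        else:                                   # spans the line: split into two pieces
            p1, p2 = split(s, plane)
            (front if da > 0 else back).append(p1)
            (back if da > 0 else front).append(p2)
    node.front, node.back = build_bsp(front), build_bsp(back)
    return node
```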
The following diagram illustrates the use of this algorithm in converting a list of lines or polygons into a BSP tree. At each of the eight steps (i.-viii.), the algorithm above is applied to a list of lines, and one new node is added to the tree.
The final number of polygons or lines in a tree is often larger (sometimes much larger[2]) than the original list, since lines or polygons that cross the partitioning plane must be split into two. It is desirable to minimize this increase, but also to maintain reasonablebalancein the final tree. The choice of which polygon or line is used as a partitioning plane (in step 1 of the algorithm) is therefore important in creating an efficient BSP tree.
A BSP tree is traversed in linear time, in an order determined by the particular function of the tree. Again using the example of rendering double-sided polygons with the painter's algorithm, drawing a polygon P correctly requires that all polygons behind the plane that P lies in be drawn first, then polygon P, and finally the polygons in front of P. If this drawing order is satisfied for all polygons in a scene, then the entire scene renders in the correct order. This procedure can be implemented by recursively traversing the BSP tree:[2] from a given viewing location V, if V is in front of the current node's plane, first render the back subtree, then the polygons stored at the node, then the front subtree; if V is behind the plane, render the subtrees in the opposite order; if V lies on the plane itself, the two subtrees can be rendered in either order.
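A minimal Python sketch of this traversal, reusing the Node and side helpers from the construction sketch above (again an illustration rather than a reproduction of the cited algorithm):

```python
def painter_traverse(node, viewer, draw):
    """Visit polygons back-to-front with respect to the viewer position."""
    if node is None:
        return
    s = side(node.partition, viewer)
    if s > 0:                                    # viewer is on the front side
        painter_traverse(node.back, viewer, draw)
        for poly in node.coplanar:
            draw(poly)
        painter_traverse(node.front, viewer, draw)
    elif s < 0:                                  # viewer is on the back side
        painter_traverse(node.front, viewer, draw)
        for poly in node.coplanar:
            draw(poly)
        painter_traverse(node.back, viewer, draw)
    else:                                        # viewer lies on the partition plane:
        painter_traverse(node.front, viewer, draw)   # the node's polygons project to
        painter_traverse(node.back, viewer, draw)    # a line and need not be drawn

# Usage: painter_traverse(build_bsp(segments), (vx, vy), my_draw_function)
```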
Applying this algorithm recursively to the BSP tree generated above results in the following steps:
The tree is traversed in linear time and renders the polygons in a far-to-near ordering (D1,B1,C1,A,D2,B2,C2,D3) suitable for the painter's algorithm.
BSP trees are often used by 3Dvideo games, particularlyfirst-person shootersand those with indoor environments.Game enginesusing BSP trees include theDoom (id Tech 1),Quake (id Tech 2 variant),GoldSrcandSourceengines. In them, BSP trees containing the static geometry of a scene are often used together with aZ-buffer, to correctly merge movable objects such as doors and characters onto the background scene. While binary space partitioning provides a convenient way to store and retrieve spatial information about polygons in a scene, it does not solve the problem ofvisible surface determination.
BSP trees have also been applied to image compression.[6]
https://en.wikipedia.org/wiki/Binary_space_partitioning
Abounding volume hierarchy(BVH) is atree structureon a set ofgeometricobjects. All geometric objects, which form the leaf nodes of the tree, are wrapped inbounding volumes. These nodes are then grouped as small sets and enclosed within larger bounding volumes. These, in turn, are also grouped and enclosed within other larger bounding volumes in a recursive fashion, eventually resulting in a tree structure with a single bounding volume at the top of the tree. Bounding volume hierarchies are used to support several operations on sets of geometric objects efficiently, such as incollision detectionandray tracing.
Although wrapping objects in bounding volumes and performing collision tests on them before testing the object geometry itself simplifies the tests and can result in significant performance improvements, the same number of pairwise tests between bounding volumes are still being performed. By arranging the bounding volumes into a bounding volume hierarchy, thetime complexity(the number of tests performed) can be reduced to logarithmic in the number of objects. With such a hierarchy in place, during collision testing, children volumes do not have to be examined if their parent volumes are not intersected (for example, if the bounding volumes of two bumper cars do not intersect, the bounding volumes of the bumpers themselves would not have to be checked for collision).
The choice of bounding volume is determined by a trade-off between two objectives. On the one hand, bounding volumes that have a very simple shape need only a few bytes to store them, andintersection testsand distance computations are simple and fast. On the other hand, bounding volumes should fit the corresponding data objects very tightly. One of the most commonly used bounding volumes is anaxis-aligned minimum bounding box. The axis-aligned minimum bounding box for a given set of data objects is easy to compute, needs only few bytes of storage, and robust intersection tests are easy to implement and extremely fast.
There are several desired properties for a BVH that should be taken into consideration when designing one for a specific application:[1]
In terms of the structure of BVH, it has to be decided what degree (the number of children) and height to use in the tree representing the BVH. A tree of a low degree will be of greater height. That increases root-to-leaf traversal time. On the other hand, less work has to be expended at each visited node to check its children for overlap. The opposite holds for a high-degree tree: although the tree will be of smaller height, more work is spent at each node. In practice, binary trees (degree = 2) are by far the most common. One of the main reasons is that binary trees are easier to build.[2]
There are three primary categories of tree construction methods: top-down, bottom-up, and insertion methods.
Top-down methodsproceed by partitioning the input set into two (or more) subsets, bounding them in the chosen bounding volume, then continuing to partition (and bound) recursively until each subset consists of only a single object represented by a leaf node. Top-down methods are easy to implement, fast to construct and by far the most popular, but do not result in the best possible trees in general.
The most crucial part for this approach is to decide how to partition objects in a node's region among the node's children. This is commonly done with a splitting plane that divides the node's bounding volume into two partitions, chosen so that having to traverse many child nodes later is minimized. Simply splitting at the midpoint of the volume centroids might be a sub-optimal choice, since it can produce a large overlap between the two child volumes. Hence, good splitting criteria such as the surface-area heuristic (SAH) are often used together with a fixed number of equal-size buckets of candidate splitting planes, so that the SAH cost only needs to be evaluated at these candidate split points.
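A sketch of such a bucketed SAH evaluation along a single axis (the bucket count, the NumPy-based representation, and the omission of traversal-cost constants are simplifying assumptions of this illustration):

```python
import numpy as np

def surface_area(lo, hi):
    d = np.maximum(hi - lo, 0.0)
    return 2.0 * (d[0] * d[1] + d[1] * d[2] + d[0] * d[2])

def sah_bucket_split(centroids, boxes_lo, boxes_hi, axis, n_buckets=12):
    """Best bucket to split after (and its SAH cost) along one axis.

    centroids: (N, 3) primitive centroids; boxes_lo/boxes_hi: (N, 3) primitive AABBs.
    """
    cmin, cmax = centroids[:, axis].min(), centroids[:, axis].max()
    if cmax - cmin < 1e-12:
        return None, np.inf                     # all centroids coincide on this axis
    # Assign each primitive to an equal-width bucket along the chosen axis.
    b = np.minimum(((centroids[:, axis] - cmin) / (cmax - cmin) * n_buckets).astype(int),
                   n_buckets - 1)
    best_cut, best_cost = None, np.inf
    for cut in range(n_buckets - 1):            # candidate split after bucket `cut`
        left, right = b <= cut, b > cut
        if not left.any() or not right.any():
            continue
        sa_left = surface_area(boxes_lo[left].min(axis=0), boxes_hi[left].max(axis=0))
        sa_right = surface_area(boxes_lo[right].min(axis=0), boxes_hi[right].max(axis=0))
        cost = sa_left * left.sum() + sa_right * right.sum()   # SAH up to constants
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut, best_cost
```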
Bottom-up methodsstart with the input set as the leaves of the tree and then group two (or more) of them to form a new (internal) node, proceed in the same manner until everything has been grouped under a single node (the root of the tree). Bottom-up methods are more difficult to implement, but likely to produce better trees in general. One study[3]indicates that in low-dimensional space, the construction speed can be largely improved (to match or outperform top-down approaches) by sorting objects using aspace-filling curveand applying approximate clustering based on this sequential order.
One example for this is the use of aZ-order curve(also known as Morton-order), where clusters can be found by simply taking a linear pass through a Morton-ordered array of leaves. Given the independent clusters of leaf nodes, sub-trees can be constructed in parallel and then further combined to form higher nodes. This parallelization makes the BVH construction very fast and can also be implemented in a hybrid manner, where all sub-trees are combined followed by a top-down approach.
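A sketch of the 30-bit Morton encoding commonly used for this purpose (the 10-bits-per-axis quantisation of coordinates normalised to the scene bounds is an assumption of this illustration):

```python
def expand_bits(v: int) -> int:
    """Insert two zero bits after each of the 10 low bits of v."""
    v &= 0x000003FF
    v = (v | (v << 16)) & 0xFF0000FF
    v = (v | (v << 8)) & 0x0F00F00F
    v = (v | (v << 4)) & 0x030C30C3
    v = (v | (v << 2)) & 0x09249249
    return v

def morton3d(x: float, y: float, z: float) -> int:
    """30-bit Morton code for a point with coordinates normalised to [0, 1]."""
    def quantise(c):
        return expand_bits(min(max(int(c * 1024.0), 0), 1023))
    return (quantise(x) << 2) | (quantise(y) << 1) | quantise(z)

# Sorting leaf centroids by morton3d(...) places spatially nearby primitives next to
# each other, so clusters (and parallel sub-tree builds) fall out of a single linear
# pass over the sorted array.
```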
Both top-down and bottom-up methods are consideredoff-line methodsas they both require all objects to be available before construction starts.Insertion methodsbuild the tree by inserting one object at a time, starting from an empty tree. The insertion location should be chosen that causes the tree to grow as little as possible according to a cost metric. Insertion methods are consideredon-line methodssince they do not require all objects to be available before construction starts and thus allow updates to be performed at runtime.
After a BVH tree is constructed, it can be converted to a compacted form for efficient traversal, improving overall system performance. The compact representation is often a linear array in memory, where the nodes of the BVH tree are stored in depth-first order. Hence, for a binary BVH tree, the first child of an internal node is placed immediately after the node itself, and only the offset from the interior node to the index of its second child must be stored explicitly.
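A minimal sketch of such a flattening pass (the node attribute names bounds, is_leaf, left, right and primitives, and the dictionary-based array entries, are assumptions of this illustration):

```python
def flatten(node, out):
    """Append `node` and its subtree to `out` in depth-first order; return its index.

    Interior entries store only the array index of their second child, because the
    first child always immediately follows its parent in the array.
    """
    idx = len(out)
    entry = {"bounds": node.bounds}
    out.append(entry)
    if node.is_leaf:
        entry["primitives"] = node.primitives
    else:
        flatten(node.left, out)                  # first child lands at idx + 1
        entry["second_child"] = flatten(node.right, out)
    return idx

# linear = []; flatten(root, linear)  -> `linear` can then be traversed iteratively.
```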
The traversal can be implemented without recursive function calls; only a stack is required to store the nodes still to be visited. A simple stack-based traversal for ray tracing applications is sketched below.
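This is an illustrative Python sketch rather than a reference implementation; the node attribute names and the caller-supplied intersects_box and intersect_primitive helpers are assumptions:

```python
def closest_hit(root, ray, intersects_box, intersect_primitive):
    """Return the nearest primitive hit along `ray`, or None if nothing is hit."""
    best = None                       # closest hit found so far
    stack = [root]                    # nodes still to be visited
    while stack:
        node = stack.pop()
        if not intersects_box(ray, node.bounds):
            continue                  # the whole subtree can be skipped
        if node.is_leaf:
            for prim in node.primitives:
                hit = intersect_primitive(ray, prim)
                if hit is not None and (best is None or hit.t < best.t):
                    best = hit
            continue
        stack.append(node.left)       # child visiting order is arbitrary here; pushing
        stack.append(node.right)      # the farther child first enables early termination
    return best
```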
BVHs are often used in ray tracing to eliminate potential intersection candidates within a scene by omitting geometric objects located in bounding volumes which are not intersected by the current ray.[5] Additionally, as a common performance optimization, when only the closest intersection of the ray is of interest and multiple child nodes are intersected by the ray during descent, the traversal algorithm considers the closer volume first; if it finds an intersection there that is definitively closer than any possible intersection in the second (or another) volume (i.e., the volumes are non-overlapping), it can safely ignore that other volume. This only requires a small change to how child nodes are ordered and tested in the traversal loop sketched above. Similar optimizations during BVH traversal can be employed when descending into child volumes of the second volume, to restrict further search space and thus reduce traversal time.
Additionally, many specialized methods were developed for BVHs, especially ones based onAABB(axis-aligned bounding boxes), such as parallel building,SIMDaccelerated traversal, good split heuristics (SAH -surface-area heuristicis often used in ray tracing), wide trees (4-ary and 16-ary trees provide some performance benefits, both in build and query performance for practical scenes), and quick structure update (in real time applications objects might be moving or deforming spatially relatively slowly or be still, and same BVH can be updated to be still valid without doing a full rebuild from scratch).
BVHs also naturally support inserting and removing objects without a full rebuild, but the resulting BVH usually has worse query performance compared to one built from scratch. To address this (and the sub-optimality of quick structure updates), a new BVH can be built asynchronously in parallel or synchronously, once sufficient change is detected (leaf overlap is large, the number of insertions and removals has crossed a threshold, or other more refined heuristics).
BVHs can also be combined withscene graphmethods, andgeometry instancing, to reduce memory usage, improve structure update and full rebuild performance, as well as guide better object or primitive splitting.
BVHs are often used for acceleratingcollision detectioncomputation. In the context of cloth simulation, BVHs are used to compute collision between a cloth and itself as well as with other objects.[6]
Another powerful use case for a BVH is pair-wise distance computation. A naive approach to finding the minimum distance between two sets of objects would compute the distance between all pair-wise combinations. A BVH allows many of these comparisons to be pruned efficiently, without computing the potentially elaborate distance between every pair of objects. Pseudocode for computing the pairwise distance between two sets of objects, and approaches for building BVHs well suited to distance calculation, are discussed in the cited reference.[4]
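That pseudocode is not reproduced here; as an illustration, a pruned minimum-distance search between two BVHs might look like the sketch below (node attribute names lo, hi, is_leaf, objects, left, right and the caller-supplied exact object_distance function are assumptions):

```python
import itertools
import math

def box_distance(lo1, hi1, lo2, hi2):
    """Lower bound on the distance between any two points of two AABBs."""
    d = 0.0
    for a_lo, a_hi, b_lo, b_hi in zip(lo1, hi1, lo2, hi2):
        gap = max(b_lo - a_hi, a_lo - b_hi, 0.0)
        d += gap * gap
    return math.sqrt(d)

def min_distance(a, b, object_distance, best=math.inf):
    """Minimum distance between the objects stored under BVH nodes `a` and `b`."""
    if box_distance(a.lo, a.hi, b.lo, b.hi) >= best:
        return best                              # this pair cannot improve the answer
    if a.is_leaf and b.is_leaf:
        for oa, ob in itertools.product(a.objects, b.objects):
            best = min(best, object_distance(oa, ob))
        return best
    if not a.is_leaf:                            # descend whichever node is internal
        for child in (a.left, a.right):
            best = min_distance(child, b, object_distance, best)
    else:
        for child in (b.left, b.right):
            best = min_distance(a, child, object_distance, best)
    return best
```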
BVH can significantly accelerate ray tracing applications by reducing the number of ray-surface intersection calculations. Hardware implementation of BVH operations such as traversal can further accelerate ray-tracing. Currently, real-time ray tracing is available on multiple platforms. Hardware implementation of BVH is one of the key innovations making it possible.
In 2018, Nvidia introduced RT Cores with their Turing GPU architecture as part of the RTX platform. RT Cores are specialized hardware units designed to accelerate BVH traversal and ray-triangle intersection tests.[7] The combination of these key features enables real-time ray tracing that can be used for video games[8] as well as design applications.
AMD'sRDNA (Radeon DNA) architecture, introduced in 2019, has incorporated hardware-accelerated ray tracing since its second iteration, RDNA 2. The architecture uses dedicated hardware units called Ray Accelerators to perform ray-box and ray-triangle intersection tests, which are crucial for traversing Bounding Volume Hierarchies (BVH).[9]In RDNA 2 and 3, the shader is responsible for traversing the BVH, while the Ray Accelerators handle intersection tests for box and triangle nodes.[10]
Originally designed to accelerate ray tracing, researchers are now exploring ways to leverage fast BVH traversal to speed up other applications. These include determining the containing tetrahedron for a point,[11]enhancing granular matter simulations,[12]and performing nearest neighbor calculations.[13]Some methods repurpose Nvidia's RT core components by reframing these tasks as ray-tracing problems.[12]This direction seems promising as substantial speedups in performance are reported across the various applications.
https://en.wikipedia.org/wiki/Bounding_volume_hierarchy
Cladistics (/kləˈdɪstɪks/ klə-DIST-iks; from Ancient Greek κλάδος kládos 'branch')[1] is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Outside of cladistics, many of these terms are normally used paraphyletically, e.g. as a 'grade'; such groups are difficult to delineate precisely, especially when extinct species are included. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.[2][3][4][5]
As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group.) To keep only valid clades, upon finding that the group is paraphyletic this way, either such excluded groups should be admitted to the clade, or the group should be abolished.[6]
Branches down to the divergence from the next significant (e.g. extant) sister group are considered stem-groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. In particular, extinct groups are always placed on a side branch, without distinguishing whether an actual ancestor of other groupings has been found.
The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (Seephylogenetic nomenclature.)
Cladistic findings pose a difficulty for taxonomy, where the rank and (genus-)naming of established groupings may turn out to be inconsistent.
Cladistics is now the most commonly used method to classify organisms.[7]
The original methods used in cladistic analysis and the school of taxonomy derived from the work of the GermanentomologistWilli Hennig, who referred to it asphylogenetic systematics(also the title of his 1966 book); but the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used inphylogeneticanalysis, although it is now sometimes used to refer to the whole field.[8]
What is now called the cladistic method appeared as early as 1901 with a work byPeter Chalmers Mitchellfor birds[9][10]and subsequently byRobert John Tillyard(for insects) in 1921,[11]andW. Zimmermann(for plants) in 1943.[12]The term "clade" was introduced in 1958 byJulian Huxleyafter having been coined byLucien Cuénotin 1940,[13]"cladogenesis" in 1958,[14]"cladistic" byArthur Cainand Harrison in 1960,[15]"cladist" (for an adherent of Hennig's school) byErnst Mayrin 1965,[16]and "cladistics" in 1966.[14]Hennig referred to his own approach as "phylogenetic systematics". From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics withpheneticsand so-calledevolutionary taxonomy. Phenetics was championed at this time by thenumerical taxonomistsPeter SneathandRobert Sokal, and evolutionary taxonomy byErnst Mayr.[17]
Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for inferring phylogenetic trees from morphological data.
In the 1990s, the development of effectivepolymerase chain reactiontechniques allowed the application of cladistic methods tobiochemicalandmolecular genetictraits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, becausecomputersmade it possible to process large quantities of data about organisms and their characteristics.
The cladistic method interprets each shared character state transformation as a potential piece of evidence for grouping.Synapomorphies(shared, derived character states) are viewed as evidence of grouping, whilesymplesiomorphies(shared ancestral character states) are not. The outcome of a cladistic analysis is acladogram– atree-shaped diagram (dendrogram)[18]that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand,genetic sequencingdata andcomputational phylogeneticsare now commonly used in phylogenetic analyses, and theparsimonycriterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified because there is no evidence that they recover more "true" or "correct" results from actual empirical data sets[19]
Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting ofmolecular, morphological,ethological[20]and/or other characters and a list ofoperational taxonomic units(OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct.
Until recently, for example, cladograms like the following have generally been accepted as accurate representations of the ancestral relations among turtles, lizards, crocodilians, and birds:[21]
Traditional (morphology-based) cladogram: (turtles, (lizards, (crocodilians, birds)))
If this phylogenetic hypothesis is correct, then the last common ancestor of turtles and birds lived earlier than the last common ancestor of lizards and birds. Most molecular evidence, however, produces cladograms more like this:[22]
Cladogram favoured by most molecular evidence: (lizards, (turtles, (crocodilians, birds)))
If this is accurate, then the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since the cladograms show two mutually exclusive hypotheses to describe the evolutionary history, at most one of them is correct.
The standard primate cladogram represents the currently universally accepted hypothesis that all primates, including strepsirrhines like the lemurs and lorises, had a common ancestor all of whose descendants are or were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes, and humans) are hypothesized to have had a common ancestor all of whose descendants are or were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used in phylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the clades Strepsirhini and Haplorhini, where the latter contains Tarsiiformes and Anthropoidea.
Lemurs and tarsiers may have looked closely related to humans, in the sense of being close on the evolutionary tree to humans. However, from the perspective of a tarsier, humans and lemurs would have looked close, in the exact same sense. Cladistics forces a neutral perspective, treating all branches (extant or extinct) in the same manner. It also forces one to try to make statements, and honestly take into account findings, about the exact historic relationships between the groups.
The following terms, coined by Hennig, are used to identify shared or distinct character states among groups:[23][24][25]
The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features.
It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree.[28]Phylogeneticsuses various forms ofparsimonyto decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence.[29]
Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states.[24][25][30]These are compared in the table below.
Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states arehomologous, a precondition of their being synapomorphies, have been challenged as involvingcircular reasoningand subjective judgements.[34]Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all.[35][36]
Transformed cladisticsarose in the late 1970s[37]in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular.[38]
The cladistic method does not identify fossil species as actual ancestors of a clade.[39]Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features.[40]
An otherwise extinct group with any extant descendants is not considered (literally) extinct,[41] and for instance does not have a date of extinction.
Anything having to do with biology and sex is complicated and messy, and cladistics is no exception.[42] Many species reproduce sexually and remain capable of interbreeding for millions of years. Worse, during such a period, many branches may have radiated, and it may take hundreds of millions of years for them to have whittled down to just two.[43] Only then can one theoretically assign proper last common ancestors of groupings which do not inadvertently include earlier branches.[44] The process of true cladistic bifurcation can thus take much longer than one is usually aware of.[45] In practice, for recent radiations, cladistically guided findings only give a coarse impression of the complexity; a more detailed account will give details about fractions of introgression between groupings, and even geographic variations thereof. This has been used as an argument for the use of paraphyletic groupings,[44] but typically other reasons are quoted.
Horizontal gene transfer is the mobility of genetic information between different organisms that can have immediate or delayed effects for the reciprocal host.[46] There are several processes in nature which can cause horizontal gene transfer. It typically does not directly interfere with the ancestry of the organism, but it can complicate the determination of that ancestry. On another level, one can map the horizontal gene transfer processes themselves by determining the phylogeny of the individual genes using cladistics.
If mutual relationships are unclear, there are many possible trees. Assigning names to each possible clade may not be prudent. Furthermore, established names are discarded in cladistics, or alternatively carry connotations which may no longer hold, such as when additional groups are found to have emerged within them.[47] Naming changes are the direct result of changes in the recognition of mutual relationships, which is often still in flux, especially for extinct species. Hanging on to older naming and/or connotations is counter-productive, as they typically do not reflect actual mutual relationships precisely. E.g. Archaea, Asgard archaea, protists, slime molds, worms, invertebrata, fishes, reptilia, monkeys, Ardipithecus, Australopithecus and Homo erectus all contain Homo sapiens cladistically, in their sensu lato meaning. For originally extinct stem groups, sensu lato generally means generously keeping previously included groups, which then may come to include even living species. A pruned sensu stricto meaning is often adopted instead, but the group would need to be restricted to a single branch on the stem. Other branches then get their own name and level. This is consistent with the fact that more senior stem branches are in fact more closely related to the resulting group than the more basal stem branches; that those stem branches may only have lived for a short time does not affect that assessment in cladistics.
The comparisons used to acquire data on whichcladogramscan be based are not limited to the field of biology.[48]Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured.
Anthropologyandarchaeology:[49]Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features.
Comparative mythology and folktale studies use cladistic methods to reconstruct the proto-version of many myths. Mythological phylogenies constructed with mythemes clearly support low levels of horizontal transmission (borrowing), historical (sometimes Palaeolithic) diffusion, and punctuated evolution.[50] They are also a powerful way to test hypotheses about cross-cultural relationships among folktales.[51][52]
Literature: Cladistic methods have been used in the classification of the surviving manuscripts of theCanterbury Tales,[53]and the manuscripts of the SanskritCharaka Samhita.[54]
Historical linguistics:[55]Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditionalcomparative methodof historical linguistics, but is more explicit in its use ofparsimonyand allows much faster analysis of large datasets (computational phylogenetics).
Textual criticismorstemmatics:[54][56]Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enablesparsimonyanalysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time.
Astrophysics[57]infers the history of relationships between galaxies to create branching diagram hypotheses of galaxy diversification.
https://en.wikipedia.org/wiki/Cladistics
Computational phylogenetics,phylogeny inference,orphylogenetic inferencefocuses on computational and optimizationalgorithms,heuristics, and approaches involved inphylogeneticanalyses. The goal is to find aphylogenetic treerepresenting optimal evolutionary ancestry between a set ofgenes,species, ortaxa.Maximum likelihood,parsimony,Bayesian, andminimum evolutionare typical optimality criteria used to assess how well a phylogenetic tree topology describes the sequence data.[1][2]Nearest Neighbour Interchange (NNI), Subtree Prune and Regraft (SPR), and Tree Bisection and Reconnection (TBR), known astree rearrangements, are deterministic algorithms to search for optimal or the best phylogenetic tree. The space and the landscape of searching for the optimal phylogenetic tree is known as phylogeny search space.
The maximum likelihood (or simply likelihood) optimality criterion seeks the tree topology, together with its branch lengths, that provides the highest probability of observing the sequence data, while the parsimony optimality criterion seeks the fewest state-evolutionary changes required for a phylogenetic tree to explain the sequence data.[1][2]
Traditional phylogenetics relies onmorphologicaldata obtained by measuring and quantifying thephenotypicproperties of representative organisms, while the more recent field of molecular phylogenetics usesnucleotidesequences encoding genes oramino acidsequences encodingproteinsas the basis for classification.
Many forms of molecular phylogenetics are closely related to and make extensive use ofsequence alignmentin constructing and refining phylogenetic trees, which are used to classify the evolutionary relationships between homologousgenesrepresented in thegenomesof divergent species. The phylogenetic trees constructed by computational methods are unlikely to perfectly reproduce theevolutionary treethat represents the historical relationships between the species being analyzed.[citation needed]The historical species tree may also differ from the historical tree of an individual homologous gene shared by those species.
Phylogenetic trees generated by computational phylogenetics can be either rooted or unrooted depending on the input data and the algorithm used. A rooted tree is a directed graph that explicitly identifies a most recent common ancestor (MRCA),[citation needed] usually an imputed sequence that is not represented in the input. Genetic distance measures can be used to plot a tree with the input sequences as leaf nodes and their distances from the root proportional to their genetic distance from the hypothesized MRCA. Identification of a root usually requires the inclusion in the input data of at least one "outgroup" known to be only distantly related to the sequences of interest.[citation needed]
By contrast, unrooted trees plot the distances and relationships between input sequences without making assumptions regarding their descent. An unrooted tree can always be produced from a rooted tree, but a root cannot usually be placed on an unrooted tree without additional data on divergence rates, such as the assumption of themolecular clockhypothesis.[3]
The set of all possible phylogenetic trees for a given group of input sequences can be conceptualized as a discretely defined multidimensional "tree space" through which search paths can be traced byoptimizationalgorithms. Although counting the total number of trees for a nontrivial number of input sequences can be complicated by variations in the definition of a tree topology, it is always true that there are more rooted than unrooted trees for a given number of inputs and choice of parameters.[2]
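For fully bifurcating (binary) trees, these counts have simple closed forms: there are (2n − 5)!! distinct unrooted and (2n − 3)!! distinct rooted binary trees for n labelled leaves. A minimal sketch (restricted to binary trees, which is one of the "variations in the definition of a tree topology" alluded to above):

```python
def double_factorial(n: int) -> int:
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def n_unrooted_binary_trees(taxa: int) -> int:
    """Number of distinct unrooted binary trees on `taxa` >= 3 leaves: (2n - 5)!!"""
    return double_factorial(2 * taxa - 5)

def n_rooted_binary_trees(taxa: int) -> int:
    """Number of distinct rooted binary trees on `taxa` >= 2 leaves: (2n - 3)!!"""
    return double_factorial(2 * taxa - 3)

for n in (4, 10, 20):
    print(n, n_unrooted_binary_trees(n), n_rooted_binary_trees(n))
# At 20 taxa there are already about 2.2e20 unrooted and 8.2e21 rooted binary trees,
# which is why exhaustive search of tree space quickly becomes infeasible.
```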
Both rooted and unrooted phylogenetic trees can be further generalized to rooted or unrootedphylogenetic networks, which allow for the modeling of evolutionary phenomena such ashybridizationorhorizontal gene transfer.[citation needed]
The basic problem in morphological phylogenetics is the assembly of amatrixrepresenting a mapping from each of the taxa being compared to representative measurements for each of the phenotypic characteristics being used as a classifier. The types of phenotypic data used to construct this matrix depend on the taxa being compared; for individual species, they may involve measurements of average body size, lengths or sizes of particular bones or other physical features, or even behavioral manifestations. Of course, since not every possible phenotypic characteristic could be measured and encoded for analysis, the selection of which features to measure is a major inherent obstacle to the method. The decision of which traits to use as a basis for the matrix necessarily represents a hypothesis about which traits of a species or higher taxon are evolutionarily relevant.[4]Morphological studies can be confounded by examples ofconvergent evolutionof phenotypes.[5]A major challenge in constructing useful classes is the high likelihood of inter-taxon overlap in the distribution of the phenotype's variation. The inclusion of extinct taxa in morphological analysis is often difficult due to absence of or incompletefossilrecords, but has been shown to have a significant effect on the trees produced; in one study only the inclusion of extinct species ofapesproduced a morphologically derived tree that was consistent with that produced from molecular data.[6]
Some phenotypic classifications, particularly those used when analyzing very diverse groups of taxa, are discrete and unambiguous; classifying organisms as possessing or lacking a tail, for example, is straightforward in the majority of cases, as is counting features such as eyes or vertebrae. However, the most appropriate representation of continuously varying phenotypic measurements is a controversial problem without a general solution. A common method is simply to sort the measurements of interest into two or more classes, rendering continuous observed variation as discretely classifiable (e.g., all examples with humerus bones longer than a given cutoff are scored as members of one state, and all members whose humerus bones are shorter than the cutoff are scored as members of a second state). This results in an easily manipulateddata setbut has been criticized for poor reporting of the basis for the class definitions and for sacrificing information compared to methods that use a continuous weighted distribution of measurements.[7]
Because morphological data is extremely labor-intensive to collect, whether from literature sources or from field observations, reuse of previously compiled data matrices is not uncommon, although this may propagate flaws in the original matrix into multiple derivative analyses.[8]
The problem of character coding is very different in molecular analyses, as the characters in biological sequence data are immediate and discretely defined - distinctnucleotidesinDNAorRNAsequences and distinctamino acidsinproteinsequences. However, defininghomologycan be challenging due to the inherent difficulties ofmultiple sequence alignment. For a given gapped MSA, several rooted phylogenetic trees can be constructed that vary in their interpretations of which changes are "mutations" versus ancestral characters, and which events areinsertion mutationsordeletion mutations. For example, given only a pairwise alignment with a gap region, it is impossible to determine whether one sequence bears an insertion mutation or the other carries a deletion. The problem is magnified in MSAs with unaligned and nonoverlapping gaps. In practice, sizable regions of a calculated alignment may be discounted in phylogenetic tree construction to avoid integrating noisy data into the tree calculation.[citation needed]
A tree built on a single gene as found in different organisms (orthologs) may not show sufficient phylogenetic signal for drawing strong conclusions. More genes can be added by concatenating their respective multiple sequence alignments into a "supermatrix", effectively creating a huge virtual gene with more evolutionary changes available for tree inference. This naive method only works well on genes with similar evolutionary histories; for more complex cases (organellar + nuclear datasets or joint amino acid + nucleotide alignments), some algorithms allow for informing them where each gene starts and ends (data partitioning). Alternatively, one can infer several single-gene trees and combine them into a "supertree". With the advent of phylogenomics, hundreds of genes may be analyzed at once.[9]
Distance-matrix methods of phylogenetic analysis explicitly rely on a measure of "genetic distance" between the sequences being classified, and therefore, they require an MSA as an input. Distance is often defined as the fraction of mismatches at aligned positions, with gaps either ignored or counted as mismatches.[3]Distance methods attempt to construct an all-to-all matrix from the sequence query set describing the distance between each sequence pair. From this is constructed a phylogenetic tree that places closely related sequences under the sameinterior nodeand whose branch lengths closely reproduce the observed distances between sequences. Distance-matrix methods may produce either rooted or unrooted trees, depending on the algorithm used to calculate them. They are frequently used as the basis for progressive and iterative types ofmultiple sequence alignments. The main disadvantage of distance-matrix methods is their inability to efficiently use information about local high-variation regions that appear across multiple subtrees.[2]
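A minimal sketch of the distance measure described above, the fraction of mismatches between each pair of aligned sequences (the treatment of gap characters is a configurable assumption of this illustration):

```python
import numpy as np

def p_distance_matrix(alignment, count_gaps_as_mismatch=False):
    """Pairwise fraction-of-mismatches matrix for equal-length aligned sequences."""
    n = len(alignment)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            same = diff = 0
            for a, b in zip(alignment[i], alignment[j]):
                if a == "-" or b == "-":
                    diff += count_gaps_as_mismatch   # gaps either count or are ignored
                    continue
                if a == b:
                    same += 1
                else:
                    diff += 1
            total = same + diff
            d[i, j] = d[j, i] = diff / total if total else 0.0
    return d

msa = ["ACGTACGT", "ACGTACGA", "ACGAACGA"]           # toy aligned sequences
print(p_distance_matrix(msa))
```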
TheUPGMA(Unweighted Pair Group Method with Arithmetic mean) andWPGMA(Weighted Pair Group Method with Arithmetic mean) methods produce rooted trees and require a constant-rate assumption - that is, it assumes anultrametrictree in which the distances from the root to every branch tip are equal.[10]
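As an illustration of the UPGMA variant (the nested-tuple tree representation, NumPy usage, and the toy distances are conveniences of this sketch, not a prescribed interface):

```python
import numpy as np

def upgma(d, names):
    """Rooted ultrametric clustering of a symmetric distance matrix.

    Returns (nested-tuple topology, number of leaves, height of the root),
    where each merge happens at half the distance between the joined clusters.
    """
    clusters = [(name, 1, 0.0) for name in names]    # (subtree, size, height)
    d = np.asarray(d, dtype=float).copy()
    while len(clusters) > 1:
        n = len(clusters)
        masked = d + np.where(np.eye(n, dtype=bool), np.inf, 0.0)
        i, j = np.unravel_index(np.argmin(masked), masked.shape)
        if i > j:
            i, j = j, i
        (ti, si, _), (tj, sj, _) = clusters[i], clusters[j]
        merged = ((ti, tj), si + sj, d[i, j] / 2.0)
        # Distance from the new cluster to the others: size-weighted arithmetic mean.
        new_row = (si * d[i] + sj * d[j]) / (si + sj)
        keep = [k for k in range(n) if k not in (i, j)]
        d = np.vstack([d[keep][:, keep], new_row[keep][None, :]])
        d = np.hstack([d, np.append(new_row[keep], 0.0)[:, None]])
        clusters = [clusters[k] for k in keep] + [merged]
    return clusters[0]

dist = np.array([[0, 2, 6], [2, 0, 6], [6, 6, 0]], dtype=float)
print(upgma(dist, ["A", "B", "C"]))   # joins A and B at height 1, then C at height 3
```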
Neighbor-joining methods apply generalcluster analysistechniques to sequence analysis using genetic distance as a clustering metric. The simpleneighbor-joiningmethod produces unrooted trees, but it does not assume a constant rate of evolution (i.e., amolecular clock) across lineages.[11]
TheFitch–Margoliash methoduses a weightedleast squaresmethod for clustering based on genetic distance.[12]Closely related sequences are given more weight in the tree construction process to correct for the increased inaccuracy in measuring distances between distantly related sequences. The distances used as input to the algorithm must be normalized to prevent large artifacts in computing relationships between closely related and distantly related groups. The distances calculated by this method must belinear; the linearity criterion for distances requires that theexpected valuesof the branch lengths for two individual branches must equal the expected value of the sum of the two branch distances - a property that applies to biological sequences only when they have been corrected for the possibility ofback mutationsat individual sites. This correction is done through the use of asubstitution matrixsuch as that derived from theJukes-Cantor modelof DNA evolution. The distance correction is only necessary in practice when the evolution rates differ among branches.[2]Another modification of the algorithm can be helpful, especially in case of concentrated distances (please refer toconcentration of measurephenomenon andcurse of dimensionality): that modification, described in,[13]has been shown to improve the efficiency of the algorithm and its robustness.
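The Jukes-Cantor correction mentioned above converts an observed mismatch fraction p into an estimate of the actual number of substitutions per site, d = −(3/4)·ln(1 − 4p/3). A minimal sketch:

```python
import math

def jukes_cantor_distance(p: float) -> float:
    """Jukes-Cantor corrected distance from an observed mismatch fraction p.

    Valid for p < 0.75; approximately equal to p when p is small, i.e. when
    multiple substitutions at the same site are still rare.
    """
    if not 0.0 <= p < 0.75:
        raise ValueError("observed proportion of differences must be in [0, 0.75)")
    return -0.75 * math.log(1.0 - (4.0 / 3.0) * p)

print(jukes_cantor_distance(0.10))   # ~0.107: barely above the raw p-distance
print(jukes_cantor_distance(0.45))   # ~0.69: the correction grows quickly
```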
The least-squares criterion applied to these distances is more accurate but less efficient than the neighbor-joining methods. An additional improvement that corrects for correlations between distances that arise from many closely related sequences in the data set can also be applied at increased computational cost. Finding the optimal least-squares tree with any correction factor isNP-complete,[14]soheuristicsearch methods like those used in maximum-parsimony analysis are applied to the search through tree space.
Independent information about the relationship between sequences or groups can be used to help reduce the tree search space and root unrooted trees. Standard usage of distance-matrix methods involves the inclusion of at least oneoutgroupsequence known to be only distantly related to the sequences of interest in the query set.[3]This usage can be seen as a type ofexperimental control. If the outgroup has been appropriately chosen, it will have a much greatergenetic distanceand thus a longer branch length than any other sequence, and it will appear near the root of a rooted tree. Choosing an appropriate outgroup requires the selection of a sequence that is moderately related to the sequences of interest; too close a relationship defeats the purpose of the outgroup and too distant addsnoiseto the analysis.[3]Care should also be taken to avoid situations in which the species from which the sequences were taken are distantly related, but the gene encoded by the sequences is highlyconservedacross lineages.Horizontal gene transfer, especially between otherwise divergentbacteria, can also confound outgroup usage.[citation needed]
Maximum parsimony(MP) is a method of identifying the potential phylogenetic tree that requires the smallest total number ofevolutionaryevents to explain the observed sequence data. Some ways of scoring trees also include a "cost" associated with particular types of evolutionary events and attempt to locate the tree with the smallest total cost. This is a useful approach in cases where not every possible type of event is equally likely - for example, when particularnucleotidesoramino acidsare known to be more mutable than others.
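Scoring a fixed candidate tree under this criterion is usually done with Fitch's small-parsimony algorithm (not described in the text above). A minimal sketch for a single character on a nested-tuple binary tree, with leaf names mapped to observed states:

```python
def fitch_score(tree, states):
    """Minimum number of state changes needed for one character on a fixed tree."""
    def post_order(node):
        if isinstance(node, str):                 # leaf: its state set, zero changes
            return {states[node]}, 0
        left_set, left_cost = post_order(node[0])
        right_set, right_cost = post_order(node[1])
        common = left_set & right_set
        if common:                                # overlap: no extra change is needed
            return common, left_cost + right_cost
        return left_set | right_set, left_cost + right_cost + 1

    return post_order(tree)[1]

# Example: a binary character (e.g. a trait present "1" or absent "0") in four taxa.
obs = {"A": "1", "B": "1", "C": "0", "D": "0"}
print(fitch_score((("A", "B"), ("C", "D")), obs))   # 1: one change explains the data
print(fitch_score((("A", "C"), ("B", "D")), obs))   # 2: this topology needs two changes
```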
The most naive way of identifying the most parsimonious tree is simple enumeration - considering each possible tree in succession and searching for the tree with the smallest score. However, this is only possible for a relatively small number of sequences or species because the problem of identifying the most parsimonious tree is known to beNP-hard;[2]consequently a number ofheuristicsearch methods foroptimizationhave been developed to locate a highly parsimonious tree, if not the best in the set. Most such methods involve asteepest descent-style minimization mechanism operating on atree rearrangementcriterion.
Thebranch and boundalgorithm is a general method used to increase the efficiency of searches for near-optimal solutions ofNP-hardproblems first applied to phylogenetics in the early 1980s.[15]Branch and bound is particularly well suited to phylogenetic tree construction because it inherently requires dividing a problem into atree structureas it subdivides the problem space into smaller regions. As its name implies, it requires as input both a branching rule (in the case of phylogenetics, the addition of the next species or sequence to the tree) and a bound (a rule that excludes certain regions of the search space from consideration, thereby assuming that the optimal solution cannot occupy that region). Identifying a good bound is the most challenging aspect of the algorithm's application to phylogenetics. A simple way of defining the bound is a maximum number of assumed evolutionary changes allowed per tree. A set of criteria known as Zharkikh's rules[16]severely limit the search space by defining characteristics shared by all candidate "most parsimonious" trees. The two most basic rules require the elimination of all but one redundant sequence (for cases where multiple observations have produced identical data) and the elimination of character sites at which two or more states do not occur in at least two species. Under ideal conditions these rules and their associated algorithm would completely define a tree.
The Sankoff-Morel-Cedergren algorithm was among the first published methods to simultaneously produce an MSA and a phylogenetic tree for nucleotide sequences.[17] The method uses a maximum parsimony calculation in conjunction with a scoring function that penalizes gaps and mismatches, thereby favoring the tree that introduces a minimal number of such events (an alternative view holds that the trees to be favored are those that maximize the amount of sequence similarity that can be interpreted as homology, a point of view that may lead to different optimal trees[18]). The imputed sequences at the interior nodes of the tree are scored and summed over all the nodes in each possible tree. The lowest-scoring tree sum provides both an optimal tree and an optimal MSA given the scoring function. Because the method is highly computationally intensive, an approximate version in which initial guesses for the interior alignments are refined one node at a time is used in practice. Both the full and the approximate version are in practice calculated by dynamic programming.[2]
More recent phylogenetic tree/MSA methods use heuristics to isolate high-scoring, but not necessarily optimal, trees. The MALIGN method uses a maximum-parsimony technique to compute a multiple alignment by maximizing acladogramscore, and its companion POY uses an iterative method that couples the optimization of the phylogenetic tree with improvements in the corresponding MSA.[19]However, the use of these methods in constructing evolutionary hypotheses has been criticized as biased due to the deliberate construction of trees reflecting minimal evolutionary events.[20]This, in turn, has been countered by the view that such methods should be seen as heuristic approaches to find the trees that maximize the amount of sequence similarity that can be interpreted as homology.[18][21]
The maximum likelihood method uses standard statistical techniques for inferring probability distributions to assign probabilities to particular possible phylogenetic trees. The method requires a substitution model to assess the probability of particular mutations; roughly, a tree that requires more mutations at interior nodes to explain the observed phylogeny will be assessed as having a lower probability. This is broadly similar to the maximum-parsimony method, but maximum likelihood allows additional statistical flexibility by permitting varying rates of evolution across both lineages and sites. In fact, the method requires that evolution at different sites and along different lineages be statistically independent. Maximum likelihood is thus well suited to the analysis of distantly related sequences, but it is believed to be computationally intractable due to its NP-hardness.[22]
The "pruning" algorithm, a variant ofdynamic programming, is often used to reduce the search space by efficiently calculating the likelihood of subtrees.[2]The method calculates the likelihood for each site in a "linear" manner, starting at a node whose only descendants are leaves (that is, the tips of the tree) and working backwards toward the "bottom" node in nested sets. However, the trees produced by the method are only rooted if the substitution model is irreversible, which is not generally true of biological systems. The search for the maximum-likelihood tree also includes a branch length optimization component that is difficult to improve upon algorithmically; generalglobal optimizationtools such as theNewton–Raphson methodare often used.
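A minimal sketch of this pruning computation for a single alignment column, assuming the Jukes-Cantor substitution model and a nested-tuple tree representation invented for this illustration (leaves are ("name", branch_length), internal nodes are (left, right, branch_length)):

```python
import numpy as np

BASES = "ACGT"

def jc_transition_matrix(t: float, mu: float = 1.0) -> np.ndarray:
    """Jukes-Cantor P(t): probabilities of keeping versus changing a base over time t."""
    e = np.exp(-4.0 * mu * t / 3.0)
    p_same, p_diff = 0.25 + 0.75 * e, 0.25 - 0.25 * e
    return np.full((4, 4), p_diff) + np.eye(4) * (p_same - p_diff)

def partial_likelihood(node, observed):
    """Post-order pass returning (partial likelihood vector, branch length to parent)."""
    if isinstance(node[0], str):                  # leaf: 1.0 for the observed base
        name, t = node
        return np.array([1.0 if b == observed[name] else 0.0 for b in BASES]), t
    left, right, t = node
    l_vec, l_t = partial_likelihood(left, observed)
    r_vec, r_t = partial_likelihood(right, observed)
    vec = (jc_transition_matrix(l_t) @ l_vec) * (jc_transition_matrix(r_t) @ r_vec)
    return vec, t

def site_likelihood(root, observed):
    vec, _ = partial_likelihood(root, observed)
    return float(np.dot(np.full(4, 0.25), vec))   # uniform base frequencies at the root

# Toy tree ((A:0.1, B:0.1):0.05, C:0.2) and one observed alignment column.
tree = ((("A", 0.1), ("B", 0.1), 0.05), ("C", 0.2), 0.0)
print(site_likelihood(tree, {"A": "A", "B": "A", "C": "G"}))
```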
Some tools that use maximum likelihood to infer phylogenetic trees from variant allelic frequency data (VAFs) include AncesTree and CITUP.[23][24]
Bayesian inferencecan be used to produce phylogenetic trees in a manner closely related to the maximum likelihood methods. Bayesian methods assume a priorprobability distributionof the possible trees, which may simply be the probability of any one tree among all the possible trees that could be generated from the data, or may be a more sophisticated estimate derived from the assumption that divergence events such asspeciationoccur asstochastic processes. The choice of prior distribution is a point of contention among users of Bayesian-inference phylogenetics methods.[2]
Implementations of Bayesian methods generally useMarkov chain Monte Carlosampling algorithms, although the choice of move set varies; selections used in Bayesian phylogenetics include circularly permuting leaf nodes of a proposed tree at each step[25]and swapping descendant subtrees of a randominternal nodebetween two related trees.[26]The use of Bayesian methods in phylogenetics has been controversial, largely due to incomplete specification of the choice of move set, acceptance criterion, and prior distribution in published work.[2]Bayesian methods are generally held to be superior to parsimony-based methods; they can be more prone to long-branch attraction than maximum likelihood techniques,[27]although they are better able to accommodate missing data.[28]
Whereas likelihood methods find the tree that maximizes the probability of the data, a Bayesian approach recovers a tree that represents the most likely clades, by drawing on the posterior distribution. However, estimates of the posterior probability of clades (measuring their 'support') can be quite wide of the mark, especially in clades that are not overwhelmingly likely. As such, other methods have been put forward to estimate posterior probability.[29]
Some tools that use Bayesian inference to infer phylogenetic trees from variant allelic frequency data (VAFs) include Canopy, EXACT, and PhyloWGS.[30][31][32]
Molecular phylogenetics methods rely on a definedsubstitution modelthat encodes a hypothesis about the relative rates ofmutationat various sites along the gene or amino acid sequences being studied. At their simplest, substitution models aim to correct for differences in the rates oftransitionsandtransversionsin nucleotide sequences. The use of substitution models is necessitated by the fact that thegenetic distancebetween two sequences increases linearly only for a short time after the two sequences diverge from each other (alternatively, the distance is linear only shortly beforecoalescence). The longer the amount of time after divergence, the more likely it becomes that two mutations occur at the same nucleotide site. Simple genetic distance calculations will thus undercount the number of mutation events that have occurred in evolutionary history. The extent of this undercount increases with increasing time since divergence, which can lead to the phenomenon oflong branch attraction, or the misassignment of two distantly related but convergently evolving sequences as closely related.[33]The maximum parsimony method is particularly susceptible to this problem due to its explicit search for a tree representing a minimum number of distinct evolutionary events.[2]
All substitution models assign a set of weights to each possible change of state represented in the sequence. The most common model types are implicitly reversible because they assign the same weight to, for example, a G>C nucleotide mutation as to a C>G mutation. The simplest possible model, theJukes-Cantor model, assigns an equal probability to every possible change of state for a given nucleotide base. The rate of change between any two distinct nucleotides will be one-third of the overall substitution rate.[2]More advanced models distinguish betweentransitionsandtransversions. The most general possible time-reversible model, called the GTR model, has six mutation rate parameters. An even more generalized model known as the general 12-parameter model breaks time-reversibility, at the cost of much additional complexity in calculating genetic distances that are consistent among multiple lineages.[2]One possible variation on this theme adjusts the rates so that overall GC content - an important measure of DNA double helix stability - varies over time.[34]
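As a sketch of how such a set of weights is typically parameterised (illustrative only: the six exchangeabilities and base frequencies are free parameters, the rates here are not normalised to one expected substitution per unit time, and SciPy's matrix exponential is used to obtain P(t) = exp(Qt)):

```python
import numpy as np
from scipy.linalg import expm

def gtr_rate_matrix(exchangeabilities, base_freqs):
    """Time-reversible GTR rate matrix Q from six exchange rates (AC, AG, AT,
    CG, CT, GT) and the equilibrium base frequencies (A, C, G, T)."""
    a, b, c, d, e, f = exchangeabilities
    pi = np.asarray(base_freqs, dtype=float)
    s = np.array([[0, a, b, c],
                  [a, 0, d, e],
                  [b, d, 0, f],
                  [c, e, f, 0]], dtype=float)     # symmetric exchangeability matrix
    q = s * pi[None, :]                           # rate i -> j proportional to pi_j
    np.fill_diagonal(q, -q.sum(axis=1))           # each row sums to zero
    return q

# Equal exchangeabilities and equal base frequencies recover the Jukes-Cantor model.
q = gtr_rate_matrix([1, 1, 1, 1, 1, 1], [0.25] * 4)
p_t = expm(q * 0.1)                               # substitution probabilities after time 0.1
print(p_t)
```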
Models may also allow for the variation of rates with positions in the input sequence. The most obvious example of such variation follows from the arrangement of nucleotides in protein-coding genes into three-basecodons. If the location of theopen reading frame(ORF) is known, rates of mutation can be adjusted for position of a given site within a codon, since it is known thatwobble base pairingcan allow for higher mutation rates in the third nucleotide of a given codon without affecting the codon's meaning in thegenetic code.[33]A less hypothesis-driven example that does not rely on ORF identification simply assigns to each site a rate randomly drawn from a predetermined distribution, often thegamma distributionorlog-normal distribution.[2]Finally, a more conservative estimate of rate variations known as thecovarionmethod allowsautocorrelatedvariations in rates, so that the mutation rate of a given site is correlated across sites and lineages.[35]
The selection of an appropriate model is critical for the production of good phylogenetic analyses, both because underparameterized or overly restrictive models may produce aberrant behavior when their underlying assumptions are violated, and because overly complex or overparameterized models are computationally expensive and the parameters may be overfit.[33]The most common method of model selection is thelikelihood ratio test(LRT), which produces a likelihood estimate that can be interpreted as a measure of "goodness of fit" between the model and the input data.[33]However, care must be taken in using these results, since a more complex model with more parameters will always have a higher likelihood than a simplified version of the same model, which can lead to the naive selection of models that are overly complex.[2]For this reason model selection computer programs will choose the simplest model that is not significantly worse than more complex substitution models. A significant disadvantage of the LRT is the necessity of making a series of pairwise comparisons between models; it has been shown that the order in which the models are compared has a major effect on the one that is eventually selected.[36]
An alternative model selection method is theAkaike information criterion(AIC), formally an estimate of theKullback–Leibler divergencebetween the true model and the model being tested. It can be interpreted as a likelihood estimate with a correction factor to penalize overparameterized models.[33]The AIC is calculated on an individual model rather than a pair, so it is independent of the order in which models are assessed. A related alternative, theBayesian information criterion(BIC), has a similar basic interpretation but penalizes complex models more heavily.[33]Determining the most suitable model for phylogeny reconstruction constitutes a fundamental step in numerous evolutionary studies. However, various criteria for model selection are leading to debate over which criterion is preferable. It has recently been shown that, when topologies and ancestral sequence reconstruction are the desired output, choosing one criterion over another is not crucial. Instead, using the most complex nucleotide substitution model, GTR+I+G, leads to similar results for the inference of tree topology and ancestral sequences.[37]
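As a rough illustration of how these criteria compare substitution models, the following Python sketch computes AIC and BIC from maximized log-likelihoods; the model names and numbers are hypothetical placeholders, not results from real data.

import math

def aic(log_likelihood: float, n_params: int) -> float:
    # AIC = 2k - 2 ln L; lower values indicate a better trade-off.
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood: float, n_params: int, n_sites: int) -> float:
    # BIC = k ln(n) - 2 ln L; penalizes extra parameters more heavily than AIC.
    return n_params * math.log(n_sites) - 2 * log_likelihood

# Hypothetical maximized log-likelihoods and parameter counts for three
# substitution models fitted to an alignment of 1,200 sites.
models = {"JC69": (-5210.4, 1), "HKY85": (-5101.7, 5), "GTR+G": (-5093.2, 10)}
n_sites = 1200
for name, (lnL, k) in models.items():
    print(name, round(aic(lnL, k), 1), round(bic(lnL, k, n_sites), 1))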
A comprehensive step-by-step protocol on constructing phylogenetic trees, including DNA/amino acid contiguous sequence assembly, multiple sequence alignment, model testing (testing best-fitting substitution models) and phylogeny reconstruction using maximum likelihood and Bayesian inference, is available atProtocol Exchange.[38]
A non-traditional way of evaluating a phylogenetic tree is to compare it with a clustering result. One can use a multidimensional scaling technique, so-called interpolative joining, to performdimensionality reductionand visualize the clustering result for the sequences in 3D, and then map the phylogenetic tree onto the clustering result. A better tree usually has a higher correlation with the clustering result.[39]
As with all statistical analysis, the estimation of phylogenies from character data requires an evaluation of confidence. A number of methods exist to test the amount of support for a phylogenetic tree, either by evaluating the support for each sub-tree in the phylogeny (nodal support) or evaluating whether the phylogeny is significantly different from other possible trees (alternative tree hypothesis tests).
The most common method for assessing tree support is to evaluate the statistical support for each node on the tree. Typically, a node with very low support is not considered valid in further analysis, and visually may be collapsed into apolytomyto indicate that relationships within a clade are unresolved.
Many methods for assessing nodal support involve consideration of multiple phylogenies. The consensus tree summarizes the nodes that are shared among a set of trees.[40]In a strict consensus, only nodes found in every tree are shown, and the rest are collapsed into an unresolvedpolytomy. Less conservative methods, such as the majority-rule consensus tree, consider nodes that are supported by a given percentage of trees under consideration (such as at least 50%).
For example, in maximum parsimony analysis, there may be many trees with the same parsimony score. A strict consensus tree would show which nodes are found in all equally parsimonious trees, and which nodes differ. Consensus trees are also used to evaluate support on phylogenies reconstructed with Bayesian inference (see below).
In statistics, thebootstrapis a method for inferring the variability of data that has an unknown distribution using pseudoreplications of the original data. For example, given a set of 100 data points, apseudoreplicateis a data set of the same size (100 points) randomly sampled from the original data, with replacement. That is, each original data point may be represented more than once in the pseudoreplicate, or not at all. Statistical support involves evaluation of whether the original data has similar properties to a large set of pseudoreplicates.
In phylogenetics, bootstrapping is conducted using the columns of the character matrix. Each pseudoreplicate contains the same number of species (rows) and characters (columns) randomly sampled from the original matrix, with replacement. A phylogeny is reconstructed from each pseudoreplicate, with the same methods used to reconstruct the phylogeny from the original data. For each node on the phylogeny, the nodal support is the percentage of pseudoreplicates containing that node.[41]
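The column-resampling step itself is simple to express; a minimal NumPy sketch is shown below, in which build_tree and get_clades are placeholder functions standing in for whichever reconstruction method and clade extraction are being used.

import numpy as np
from collections import Counter

def bootstrap_support(matrix, build_tree, get_clades, n_replicates=100, seed=0):
    """matrix: taxa x characters array; build_tree and get_clades are supplied
    by the user (placeholders here); get_clades must return hashable clade ids."""
    rng = np.random.default_rng(seed)
    n_chars = matrix.shape[1]
    clade_counts = Counter()
    for _ in range(n_replicates):
        # Sample columns with replacement to form a pseudoreplicate matrix.
        cols = rng.integers(0, n_chars, size=n_chars)
        pseudoreplicate = matrix[:, cols]
        tree = build_tree(pseudoreplicate)
        clade_counts.update(get_clades(tree))
    # Support = percentage of pseudoreplicates containing each clade.
    return {clade: 100.0 * c / n_replicates for clade, c in clade_counts.items()}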
The statistical rigor of the bootstrap test has been empirically evaluated using viral populations with known evolutionary histories,[42]finding that 70% bootstrap support corresponds to a 95% probability that the clade exists. However, this was tested under ideal conditions (e.g. no change in evolutionary rates, symmetric phylogenies). In practice, values above 70% are generally regarded as supported, with the evaluation of confidence left to the researcher or reader. Nodes with support lower than 70% are typically considered unresolved.
Jackknifing in phylogenetics is a similar procedure, except the columns of the matrix are sampled without replacement. Pseudoreplicates are generated by randomly subsampling the data—for example, a "10% jackknife" would involve randomly sampling 10% of the matrix many times to evaluate nodal support.
Reconstruction of phylogenies usingBayesian inferencegenerates a posterior distribution of highly probable trees given the data and evolutionary model, rather than a single "best" tree. The trees in the posterior distribution generally have many different topologies. When the input data is variant allelic frequency data (VAF), the tool EXACT can compute the probabilities of trees exactly, for small, biologically relevant tree sizes, by exhaustively searching the entire tree space.[30]
Most Bayesian inference methods utilize a Markov-chain Monte Carlo iteration, and the initial steps of this chain are not considered reliable reconstructions of the phylogeny. Trees generated early in the chain are usually discarded asburn-in. The most common method of evaluating nodal support in a Bayesian phylogenetic analysis is to calculate the percentage of trees in the posterior distribution (post-burn-in) which contain the node.
The statistical support for a node in Bayesian inference is expected to reflect the probability that a clade really exists given the data and evolutionary model.[43]Therefore, the threshold for accepting a node as supported is generally higher than for bootstrapping.
Bremer supportcounts the number of extra steps needed to contradict a clade.
These measures each have their weaknesses. For example, smaller or larger clades tend to attract larger support values than mid-sized clades, simply as a result of the number of taxa in them.[44]
Bootstrap support can provide high estimates of node support as a result of noise in the data rather than the true existence of a clade.[45]
Ultimately, there is no way to measure whether a particular phylogenetic hypothesis is accurate or not, unless the true relationships among the taxa being examined are already known (which may happen with bacteria or viruses under laboratory conditions). The best result an empirical phylogeneticist can hope to attain is a tree with branches that are well supported by the available evidence. Several potential pitfalls have been identified:
Certain characters are more likely toevolve convergentlythan others; logically, such characters should be given less weight in the reconstruction of a tree.[46]Weights in the form of a model of evolution can be inferred from sets of molecular data, so thatmaximum likelihoodorBayesianmethods can be used to analyze them. For molecular sequences, this problem is exacerbated when the taxa under study have diverged substantially. As time since the divergence of two taxa increases, so does the probability of multiple substitutions on the same site, or back mutations, all of which result in homoplasies. For morphological data, unfortunately, the only objective way to determine convergence is by the construction of a tree – a somewhat circular method. Even so, weighting homoplasious characters does indeed lead to better-supported trees.[46]Further refinement can be brought by weighting changes in one direction higher than changes in another; for instance, the presence of thoracic wings almost guarantees placement among the pterygote insects because, although wings are often lost secondarily, there is no evidence that they have been gained more than once.[47]
In general, organisms can inherit genes in two ways: vertical gene transfer andhorizontal gene transfer. Vertical gene transfer is the passage of genes from parent to offspring, and horizontal (also called lateral) gene transfer occurs when genes jump between unrelated organisms, a common phenomenon especially inprokaryotes; a good example of this is the acquiredantibiotic resistanceas a result of gene exchange between various bacteria leading to multi-drug-resistant bacterial species. There have also been well-documented cases of horizontal gene transferbetween eukaryotes.
Horizontal gene transfer has complicated the determination of phylogenies of organisms, and inconsistencies in phylogeny have been reported among specific groups of organisms depending on the genes used to construct evolutionary trees. The only way to determine which genes have been acquired vertically and which horizontally is toparsimoniouslyassume that the largest set of genes that have been inherited together have been inherited vertically; this requires analyzing a large number of genes.
The basic assumption underlying the mathematical model of cladistics is a situation where species split neatly in bifurcating fashion. While such an assumption may hold on a larger scale (bar horizontal gene transfer, see above),speciationis often much less orderly. Research since the cladistic method was introduced has shown thathybrid speciation, once thought rare, is in fact quite common, particularly in plants.[48][49]Alsoparaphyletic speciationis common, making the assumption of a bifurcating pattern unsuitable, leading tophylogenetic networksrather than trees.[50][51]Introgressioncan also move genes between otherwise distinct species and sometimes even genera,[52]complicating phylogenetic analysis based on genes.[53]This phenomenon can contribute to "incomplete lineage sorting" and is thought to be a common phenomenon across a number of groups. In species level analysis this can be dealt with by larger sampling or better whole genome analysis.[54]Often the problem is avoided by restricting the analysis to fewer, not closely related specimens.
Owing to the development of advanced sequencing techniques inmolecular biology, it has become feasible to gather large amounts of data (DNA or amino acid sequences) to infer phylogenetic hypotheses. For example, it is not rare to find studies with character matrices based on wholemitochondrialgenomes (~16,000 nucleotides, in many animals). However, simulations have shown that it is more important to increase the number of taxa in the matrix than to increase the number of characters, because the more taxa there are, the more accurate and more robust is the resulting phylogenetic tree.[55][56]This may be partly due to the breaking up oflong branches.
Another important factor that affects the accuracy of tree reconstruction is whether the data analyzed actually contain a useful phylogenetic signal, a term that is used generally to denote whether a character evolves slowly enough to have the same state in closely related taxa as opposed to varying randomly. Tests for phylogenetic signal exist.[57]
Morphological characters that sample a continuum may contain phylogenetic signal, but are hard to code as discrete characters. Several methods have been used, one of which is gap coding, and there are variations on gap coding.[58]In the original form of gap coding:[58]
group means for a character are first ordered by size. The pooled within-group standard deviation is calculated ... and differences between adjacent means ... are compared relative to this standard deviation. Any pair of adjacent means is considered different and given different integer scores ... if the means are separated by a "gap" greater than the within-group standard deviation ... times some arbitrary constant.
If more taxa are added to the analysis, the gaps between taxa may become so small that all information is lost. Generalized gap coding works around that problem by comparing individual pairs of taxa rather than considering one set that contains all of the taxa.[58]
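A minimal sketch of simple gap coding as quoted above, assuming per-taxon samples of a continuous character and using a simple unweighted pooled standard deviation (the multiplying constant is left as a parameter):

import numpy as np

def gap_code(samples_by_taxon, constant=1.0):
    """samples_by_taxon: dict mapping taxon name -> 1-D array of measurements.
    Returns integer character states assigned by simple gap coding."""
    names = list(samples_by_taxon)
    means = np.array([np.mean(samples_by_taxon[t]) for t in names])
    # Pooled within-group standard deviation (unweighted pooling for brevity).
    pooled_var = np.mean([np.var(samples_by_taxon[t], ddof=1) for t in names])
    pooled_sd = np.sqrt(pooled_var)
    order = np.argsort(means)                 # order group means by size
    codes, state = {names[order[0]]: 0}, 0
    for prev, curr in zip(order[:-1], order[1:]):
        # A new state begins whenever adjacent means are separated by a "gap"
        # larger than the pooled standard deviation times the constant.
        if means[curr] - means[prev] > constant * pooled_sd:
            state += 1
        codes[names[curr]] = state
    return codes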
In general, the more data that are available when constructing a tree, the more accurate and reliable the resulting tree will be. Missing data are no more detrimental than simply having fewer data, although the impact is greatest when most of the missing data are in a small number of taxa. Concentrating the missing data across a small number of characters produces a more robust tree.[59]
Because many characters involve embryological, or soft-tissue or molecular characters that (at best) hardly ever fossilize, and the interpretation of fossils is more ambiguous than that ofliving taxa, extinct taxa almost invariably have higher proportions of missing data than living ones. However, despite these limitations, the inclusion of fossils is invaluable, as they can provide information in sparse areas of trees, breaking up long branches and constraining intermediate character states; thus, fossil taxa contribute as much to tree resolution as modern taxa.[60]Fossils can also constrain the age of lineages and thus demonstrate how consistent a tree is with the stratigraphic record;[1]stratocladisticsincorporates age information into data matrices for phylogenetic analyses.
|
https://en.wikipedia.org/wiki/Computational_phylogenetics
|
CURE(Clustering Using REpresentatives) is an efficientdata clusteringalgorithm for largedatabases[citation needed]. Compared withK-means clusteringit is morerobusttooutliersand able to identify clusters having non-spherical shapes and size variances.
The popularK-means clusteringalgorithm minimizes thesum of squared errorscriterion: in common notation, withk{\displaystyle k}clustersCi{\displaystyle C_{i}}and cluster meansμi{\displaystyle \mu _{i}}, this isE=∑i=1k∑x∈Ci‖x−μi‖2{\displaystyle E=\sum _{i=1}^{k}\sum _{x\in C_{i}}\left\|x-\mu _{i}\right\|^{2}}.
Given large differences in the sizes or geometries of different clusters, the square-error method could split large clusters in order to minimize the square error, which is not always correct. These problems also exist for hierarchical clustering algorithms, because none of the usual distance measures between clusters (dmin,dmean{\displaystyle d_{min},d_{mean}}) work well for different cluster shapes. In addition, therunning timeis high when n is large.
The problem with theBIRCH algorithmis that once the clusters are generated after step 3, it uses centroids of the clusters and assigns eachdata pointto the cluster with the closest centroid.[citation needed]Using only the centroid to redistribute the data has problems when clusters lack uniform sizes and shapes.
To avoid the problems with non-uniformly sized or shaped clusters, CURE employs ahierarchical clusteringalgorithm that adopts amiddle groundbetween the centroid-based and all-point extremes. In CURE, a constant number c of well-scattered points of a cluster are chosen and shrunk towards the centroid of the cluster by a fraction α. The scattered points after shrinking are used as representatives of the cluster. The two clusters with the closest pair of representatives are merged at each step of CURE's hierarchical clustering algorithm. This enables CURE to identify the clusters correctly and makes it less sensitive to outliers.
Running time isO(n2logn){\displaystyle O(n^{2}\log n)}, making it rather expensive, andspace complexityisO(n){\displaystyle O(n)}.
The algorithm cannot be applied directly to large databases because of its high runtime complexity; enhancements such as random sampling and partitioning of the data set are used to address this.
CURE (no. of points, k)
Input: A set of points S
Output: k clusters
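The body of the procedure is not reproduced here; the following Python fragment is only a rough sketch of the representative-point step described above (choosing well-scattered points with a simplified farthest-point heuristic and shrinking them toward the centroid), not the reference implementation.

import numpy as np

def representatives(points, c=5, alpha=0.3):
    """Pick up to c well-scattered points of a cluster and shrink them toward
    the centroid by a fraction alpha (simplified farthest-point heuristic)."""
    centroid = points.mean(axis=0)
    chosen = [points[np.argmax(np.linalg.norm(points - centroid, axis=1))]]
    while len(chosen) < min(c, len(points)):
        dists = np.min([np.linalg.norm(points - r, axis=1) for r in chosen], axis=0)
        chosen.append(points[np.argmax(dists)])   # farthest from the current set
    reps = np.array(chosen)
    return reps + alpha * (centroid - reps)       # shrink toward the centroid

def cluster_distance(reps_u, reps_v):
    # Distance between clusters = closest pair of their representative points.
    return min(np.linalg.norm(a - b) for a in reps_u for b in reps_v)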
|
https://en.wikipedia.org/wiki/CURE_data_clustering_algorithm
|
In the study ofhierarchical clustering,Dasgupta's objectiveis a measure of the quality of a clustering, defined from asimilarity measureon the elements to be clustered. It is named after Sanjoy Dasgupta, who formulated it in 2016.[1]Its key property is that, when the similarity comes from anultrametric space, the optimal clustering for this quality measure follows the underlying structure of the ultrametric space. In this sense, clustering methods that produce good clusterings for this objective can be expected to approximate theground truthunderlying the given similarity measure.[2]
In Dasgupta's formulation, the input to a clustering problem consists of similarity scores between certain pairs of elements, represented as anundirected graphG=(V,E){\displaystyle G=(V,E)}, with the elements as its vertices and with non-negative real weights on its edges. Large weights indicate elements that should be considered more similar to each other, while small weights or missing edges indicate pairs of elements that are not similar. A hierarchical clustering can be described as a tree (not necessarily a binary tree) whose leaves are the elements to be clustered; the clusters are then the subsets of elements descending from each tree node, and the size|C|{\displaystyle |C|}of any clusterC{\displaystyle C}is its number of elements. For each edgeuv{\displaystyle uv}of the input graph, letw(uv){\displaystyle w(uv)}denote the weight of edgeuv{\displaystyle uv}and letC(uv){\displaystyle C(uv)}denote the smallest cluster of a given clustering that contains bothu{\displaystyle u}andv{\displaystyle v}. Then Dasgupta defines the cost of a clustering to be[1]the sum, over all edges of the graph, of the edge weight multiplied by the size of this smallest cluster:∑uv∈Ew(uv)⋅|C(uv)|{\displaystyle \sum _{uv\in E}w(uv)\cdot |C(uv)|}.
The optimal clustering for this objective isNP-hardto find. However, it is possible to find a clustering that approximates the minimum value of the objective inpolynomial timeby a divisive (top-down) clustering algorithm that repeatedly subdivides the elements using anapproximation algorithmfor thesparsest cut problem, the problem of finding a partition that minimizes the ratio of the total weight of cut edges to the total number of cut pairs.[1]Equivalently, for purposes of approximation, one may minimize the ratio of the total weight of cut edges to the number of elements on the smaller side of the cut. Using the best known approximation for the sparsest cut problem, theapproximation ratioof this approach isO(logn){\displaystyle O({\sqrt {\log n}})}.[3]
|
https://en.wikipedia.org/wiki/Dasgupta%27s_objective
|
Adendrogramis adiagramrepresenting atree graph. This diagrammatic representation is frequently used in different contexts, such as displaying the results ofhierarchical clusteringand depicting evolutionary relationships in phylogenetics and computational biology.
The namedendrogramderives from the twoAncient Greekwordsδένδρον(déndron), meaning "tree", andγράμμα(grámma), meaning "drawing, mathematical figure".[7][8]
For a clustering example, suppose that five taxa (a{\displaystyle a}toe{\displaystyle e}) have been clustered byUPGMAbased on a matrix ofgenetic distances. Thehierarchical clusteringdendrogram would show a column of five nodes representing the initial data (here individual taxa), and the remaining nodes represent the clusters to which the data belong, with the arrows representing the distance (dissimilarity). The distance between merged clusters is monotone, increasing with the level of the merger: the height of each node in the plot is proportional to the value of the intergroup dissimilarity between its two daughters (the nodes on the right representing individual observations all plotted at zero height).
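A dendrogram of this kind can be drawn with standard library routines; the sketch below uses SciPy's average-linkage (UPGMA) clustering on a hypothetical distance matrix for the five taxa.

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform
import matplotlib.pyplot as plt

# Hypothetical pairwise genetic distances between taxa a-e.
labels = ["a", "b", "c", "d", "e"]
D = np.array([[0.0, 0.2, 0.5, 0.6, 0.7],
              [0.2, 0.0, 0.5, 0.6, 0.7],
              [0.5, 0.5, 0.0, 0.4, 0.7],
              [0.6, 0.6, 0.4, 0.0, 0.7],
              [0.7, 0.7, 0.7, 0.7, 0.0]])

Z = linkage(squareform(D), method="average")  # UPGMA on the condensed matrix
dendrogram(Z, labels=labels)                  # node heights = merge dissimilarities
plt.show()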
|
https://en.wikipedia.org/wiki/Dendrogram
|
Determining the number of clusters in adata set, a quantity often labelledkas in thek-means algorithm, is a frequent problem indata clustering, and is a distinct issue from the process of actually solving the clustering problem.
For a certain class ofclustering algorithms(in particulark-means,k-medoidsandexpectation–maximization algorithm), there is a parameter commonly referred to askthat specifies the number of clusters to detect. Other algorithms such asDBSCANandOPTICS algorithmdo not require the specification of this parameter;hierarchical clusteringavoids the problem altogether.
The correct choice ofkis often ambiguous, with interpretations depending on the shape and scale of the distribution of points in a data set and the desired clustering resolution of the user. In addition, increasingkwithout penalty will always reduce the amount of error in the resulting clustering, to the extreme case of zero error if each data point is considered its own cluster (i.e., whenkequals the number of data points,n). Intuitively then,the optimal choice ofkwill strike a balance between maximum compression of the data using a single cluster, and maximum accuracy by assigning each data point to its own cluster. If an appropriate value ofkis not apparent from prior knowledge of the properties of the data set, it must be chosen somehow. There are several categories of methods for making this decision.
Theelbow methodlooks at the percentage ofexplained varianceas a function of the number of clusters:
One should choose a number of clusters so that adding another cluster does not give much better modeling of the data.
More precisely, if one plots the percentage of variance explained by the clusters against the number of clusters, the first clusters will add much information (explain a lot of variance), but at some point the marginal gain will drop, giving an angle in the graph. The number of clusters is chosen at this point, hence the "elbow criterion".
In most datasets, this "elbow" is ambiguous,[1]making this method subjective and unreliable. Because the scale of the axes is arbitrary, the concept of an angle is not well-defined, and even on uniform random data, the curve produces an "elbow", making the method rather unreliable.[2]Percentage of variance explained is the ratio of the between-group variance to the total variance, also known as anF-test. A slight variation of this method plots the curvature of the within group variance.[3]
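A minimal sketch of the elbow plot using scikit-learn's k-means on synthetic data (the within-cluster sum of squares is exposed as the inertia_ attribute):

import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Synthetic data: three Gaussian blobs in two dimensions.
X = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in ([0, 0], [5, 0], [0, 5])])

ks = range(1, 10)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]
plt.plot(list(ks), inertias, marker="o")  # look for the "elbow" in this curve
plt.xlabel("number of clusters k")
plt.ylabel("within-cluster sum of squares")
plt.show()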
The method can be traced to speculation byRobert L. Thorndikein 1953.[4]While the idea of the elbow method sounds simple and straightforward, other methods (as detailed below) give better results.
In statistics anddata mining,X-means clusteringis a variation ofk-means clusteringthat refines cluster assignments by repeatedly attempting subdivision, and keeping the best resulting splits, until a criterion such as theAkaike information criterion(AIC) orBayesian information criterion(BIC) is reached.[5]
Another set of methods for determining the number of clusters are information criteria, such as theAkaike information criterion(AIC),Bayesian information criterion(BIC), or thedeviance information criterion(DIC) — if it is possible to make alikelihood functionfor the clustering model.
For example: Thek-means model is "almost" aGaussian mixture modeland one can construct a likelihood for the Gaussian mixture model and thus also determine information criterion values.[6]
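For instance, with scikit-learn one can fit Gaussian mixture models over a range of k and compare their information criteria; the sketch below assumes X is a NumPy array of observations.

from sklearn.mixture import GaussianMixture

def choose_k_by_bic(X, k_max=10, random_state=0):
    scores = {}
    for k in range(1, k_max + 1):
        gmm = GaussianMixture(n_components=k, random_state=random_state).fit(X)
        scores[k] = gmm.bic(X)          # gmm.aic(X) would give the AIC instead
    return min(scores, key=scores.get)  # lower BIC indicates a better model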
Rate distortion theoryhas been applied to choosingkin an approach called the "jump" method, which determines the number of clusters that maximizes efficiency while minimizing error byinformation-theoreticstandards.[7]The strategy of the algorithm is to generate a distortion curve for the input data by running a standard clustering algorithm such ask-meansfor all values ofkbetween 1 andn, and computing the distortion (described below) of the resulting clustering. The distortion curve is then transformed by a negative power chosen based on thedimensionalityof the data. Jumps in the resulting values then signify reasonable choices fork, with the largest jump representing the best choice.
The distortion of a clustering of some input data is formally defined as follows: Let the data set be modeled as ap-dimensionalrandom variable,X, consisting of amixture distributionofGcomponents with commoncovariance,Γ. If we letc1…cK{\displaystyle c_{1}\ldots c_{K}}be a set ofKcluster centers, withcX{\displaystyle c_{X}}the closest center to a given sample ofX, then the minimum average distortion per dimension when fitting theKcenters to the data is:dK=1pminc1…cKE[(X−cX)TΓ−1(X−cX)].{\displaystyle d_{K}={\frac {1}{p}}\min _{c_{1}\ldots c_{K}}\operatorname {E} {\bigl [}(X-c_{X})^{T}\Gamma ^{-1}(X-c_{X}){\bigr ]}.}
This is also the averageMahalanobis distanceper dimension betweenXand the closest cluster centercX{\displaystyle c_{X}}. Because the minimization over all possible sets of cluster centers is prohibitively complex, the distortion is computed in practice by generating a set of cluster centers using a standard clustering algorithm and computing the distortion using the result. The pseudo-code for the jump method with an input set ofp-dimensional data pointsXis:
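The pseudo-code itself is not reproduced here; the following Python sketch follows the description above, using k-means distortions, a transform power of p/2, and the largest jump, and assuming the identity matrix in place of an estimated common covariance.

import numpy as np
from sklearn.cluster import KMeans

def jump_method(X, k_max=10, random_state=0):
    n, p = X.shape
    y = p / 2.0                                    # transform power Y = p/2
    distortions = []
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
        # Average distortion per dimension (identity covariance assumed here).
        distortions.append(km.inertia_ / (n * p))
    transformed = np.array(distortions) ** (-y)
    # Jumps J(k) = d_k^(-Y) - d_(k-1)^(-Y), with d_0^(-Y) taken to be 0.
    jumps = np.diff(np.concatenate(([0.0], transformed)))
    return int(np.argmax(jumps)) + 1               # k with the largest jump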
The choice of the transform powerY=(p/2){\displaystyle Y=(p/2)}is motivated byasymptotic reasoningusing results from rate distortion theory. Let the dataXhave a single, arbitrarilyp-dimensionalGaussian distribution, and let fixedK=⌊αp⌋{\displaystyle K=\lfloor \alpha ^{p}\rfloor }, for someαgreater than zero. Then the distortion of a clustering ofKclusters in thelimitaspgoes to infinity isα−2{\displaystyle \alpha ^{-2}}. It can be seen that asymptotically, the distortion of a clustering to the power(−p/2){\displaystyle (-p/2)}is proportional toαp{\displaystyle \alpha ^{p}}, which by definition is approximately the number of clustersK. In other words, for a single Gaussian distribution, increasingKbeyond the true number of clusters, which should be one, causes a linear growth in distortion. This behavior is important in the general case of a mixture of multiple distribution components.
LetXbe a mixture ofGp-dimensional Gaussian distributions with common covariance. Then for any fixedKless thanG, the distortion of a clustering aspgoes to infinity is infinite. Intuitively, this means that a clustering of less than the correct number of clusters is unable to describe asymptotically high-dimensional data, causing the distortion to increase without limit. If, as described above,Kis made an increasing function ofp, namely,K=⌊αp⌋{\displaystyle K=\lfloor \alpha ^{p}\rfloor }, the same result as above is achieved, with the value of the distortion in the limit aspgoes to infinity being equal toα−2{\displaystyle \alpha ^{-2}}. Correspondingly, there is the same proportional relationship between the transformed distortion and the number of clusters,K.
Putting the results above together, it can be seen that for sufficiently high values ofp, the transformed distortiondK−p/2{\displaystyle d_{K}^{-p/2}}is approximately zero forK<G, then jumps suddenly and begins increasing linearly forK≥G. The jump algorithm for choosingKmakes use of these behaviors to identify the most likely value for the true number of clusters.
Although the mathematical support for the method is given in terms of asymptotic results, the algorithm has beenempiricallyverified to work well in a variety of data sets with reasonable dimensionality. In addition to the localized jump method described above, there exists a second algorithm for choosingKusing the same transformed distortion values known as the broken line method. The broken line method identifies the jump point in the graph of the transformed distortion by doing a simpleleast squareserror line fit of two line segments, which in theory will fall along thex-axis forK<G, and along the linearly increasing phase of the transformed distortion plot forK≥G. The broken line method is more robust than the jump method in that its decision is global rather than local, but it also relies on the assumption of Gaussian mixture components, whereas the jump method is fullynon-parametricand has been shown to be viable for general mixture distributions.
The averagesilhouetteof the data is another useful criterion for assessing the natural number of clusters. The silhouette of a data instance is a measure of how closely it is matched to data within its cluster and how loosely it is matched to data of the neighboring cluster, i.e., the cluster whose average distance from the datum is lowest.[8]A silhouette close to 1 implies the datum is in an appropriate cluster, while a silhouette close to −1 implies the datum is in the wrong cluster. Optimization techniques such asgenetic algorithmsare useful in determining the number of clusters that gives rise to the largest silhouette.[9]It is also possible to re-scale the data in such a way that the silhouette is more likely to be maximized at the correct number of clusters.[10]
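In practice the average silhouette is usually computed with library routines; a sketch with scikit-learn follows (X is assumed to be a NumPy array of observations).

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k_by_silhouette(X, k_max=10, random_state=0):
    best_k, best_score = None, -1.0
    for k in range(2, k_max + 1):            # the silhouette needs at least 2 clusters
        labels = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(X)
        score = silhouette_score(X, labels)  # mean silhouette over all points
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score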
One can also use the process ofcross-validationto analyze the number of clusters. In this process, the data is partitioned intovparts. Each of the parts is then set aside at turn as a test set, a clustering model computed on the otherv− 1 training sets, and the value of the objective function (for example, the sum of the squared distances to the centroids fork-means) calculated for the test set. Thesevvalues are calculated and averaged for each alternative number of clusters, and the cluster number selected such that further increase in number of clusters leads to only a small reduction in the objective function.[citation needed]
When clustering text databases with the cover coefficient on a document collection defined by a document-by-term matrixD(of size m×n, where m is the number of documents and n is the number of terms), the number of clusters can roughly be estimated by the formulamnt{\displaystyle {\tfrac {mn}{t}}}, where t is the number of non-zero entries in D. Note that in D each row and each column must contain at least one non-zero element.[11]
The kernel matrix defines the proximity of the input information. For example, with a Gaussianradial basis functionit determines thedot productof the inputs in a higher-dimensional space, called thefeature space. The data are believed to become more linearly separable in the feature space, and hence linear algorithms can be applied to the data with greater success.
The kernel matrix can thus be analyzed in order to find the optimal number of clusters.[12]The method proceeds by the eigenvalue decomposition of the kernel matrix. It then analyzes the eigenvalues and eigenvectors to obtain a measure of the compactness of the input distribution. Finally, a plot is drawn, in which the elbow of that plot indicates the optimal number of clusters in the data set. Unlike previous methods, this technique does not need to perform any clustering a priori; it finds the number of clusters directly from the data.
Robert Tibshirani, Guenther Walther, andTrevor Hastieproposed estimating the number of clusters in a data set via the gap statistic.[13]The gap statistic, motivated by theoretical grounds, measures how far the pooled within-cluster sum of squares around the cluster centers falls from the sum of squares expected under a null reference distribution of the data.
The expected value is estimated by simulating null reference data of characteristics of the original data, but lacking any clusters in it.
The optimal number of clusters is then estimated as the value ofkfor which the observed sum of squares falls farthest below the null reference.
Unlike many previous methods, the gap statistic can tell us that there is no value ofkfor which there is a good clustering, but its reliability depends on how plausible the assumed null distribution (e.g., a uniform distribution) is for the given data. This tends to work well in synthetic settings, but it cannot handle difficult data sets well, for example those with uninformative attributes, because it assumes all attributes to be equally important.
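A simplified sketch of the gap statistic with a uniform reference distribution over the bounding box of the data (the published method also applies a standard-error rule when picking k, which is omitted here):

import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k_max=10, n_refs=10, random_state=0):
    rng = np.random.default_rng(random_state)
    lo, hi = X.min(axis=0), X.max(axis=0)
    gaps = []
    for k in range(1, k_max + 1):
        log_wk = np.log(KMeans(n_clusters=k, n_init=10,
                               random_state=random_state).fit(X).inertia_)
        # Expected log(W_k) under a uniform reference distribution.
        ref_logs = []
        for _ in range(n_refs):
            ref = rng.uniform(lo, hi, size=X.shape)
            ref_logs.append(np.log(KMeans(n_clusters=k, n_init=10,
                                          random_state=random_state).fit(ref).inertia_))
        gaps.append(np.mean(ref_logs) - log_wk)
    return int(np.argmax(gaps)) + 1          # naive choice: k maximizing the gap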
The gap statistic is implemented as theclusGapfunction in theclusterpackage[15]inR.
|
https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set
|
Hierarchical clusteringis one method for findingcommunity structuresin anetwork. The technique arranges the network into a hierarchy of groups according to a specified weight function. The data can then be represented in a tree structure known as adendrogram. Hierarchical clustering can either beagglomerativeordivisivedepending on whether one proceeds through the algorithm by adding links to or removing links from the network, respectively. One divisive technique is theGirvan–Newman algorithm.
In the hierarchical clustering algorithm, aweightWij{\displaystyle W_{ij}}is first assigned to each pair ofvertices(i,j){\displaystyle (i,j)}in the network. The weight, which can vary depending on implementation (see section below), is intended to indicate how closely related the vertices are. Then, starting with all the nodes in the network disconnected, begin pairing nodes from highest to lowest weight between the pairs (in the divisive case, start from the original network and remove links from lowest to highest weight). As links are added, connected subsets begin to form. These represent the network's community structures.
The components at each iterative step are always a subset of other structures. Hence, the subsets can be represented using a tree diagram, ordendrogram. Horizontal slices of the tree at a given level indicate the communities that exist above and below a value of the weight.
There are many possible weights for use in hierarchical clustering algorithms. The specific weight used is dictated by the data as well as considerations for computational speed. Additionally, the communities found in the network are highly dependent on the choice of weighting function. Hence, when compared to real-world data with a known community structure, the various weighting techniques have been met with varying degrees of success.
Two weights that have been used previously with varying success are the number of node-independent paths between each pair of vertices and the total number of paths between vertices weighted by the length of the path. One disadvantage of these weights, however, is that both weighting schemes tend to separate single peripheral vertices from their rightful communities because of the small number of paths going to these vertices. For this reason, their use in hierarchical clustering techniques is far from optimal.[1]
Edgebetweenness centralityhas been used successfully as a weight in theGirvan–Newman algorithm.[1]This technique is similar to a divisive hierarchical clustering algorithm, except the weights are recalculated with each step.
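With the NetworkX library, for example, the Girvan–Newman procedure can be run directly; the sketch below uses the karate-club example graph that ships with the library.

import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()
communities = girvan_newman(G)        # iteratively removes highest-betweenness edges
first_split = next(communities)       # first level of the resulting hierarchy
print([sorted(c) for c in first_split])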
The change inmodularityof the network with the addition of a node has also been used successfully as a weight.[2]This method provides a computationally less-costly alternative to the Girvan-Newman algorithm while yielding similar results.
|
https://en.wikipedia.org/wiki/Hierarchical_clustering_of_networks
|
In the theory ofcluster analysis, thenearest-neighbor chain algorithmis analgorithmthat can speed up several methods foragglomerative hierarchical clustering. These are methods that take a collection of points as input, and create a hierarchy of clusters of points by repeatedly merging pairs of smaller clusters to form larger clusters. The clustering methods that the nearest-neighbor chain algorithm can be used for includeWard's method,complete-linkage clustering, andsingle-linkage clustering; these all work by repeatedly merging the closest two clusters but use different definitions of the distance between clusters. The cluster distances for which the nearest-neighbor chain algorithm works are calledreducibleand are characterized by a simple inequality among certain cluster distances.
The main idea of the algorithm is to find pairs of clusters to merge by followingpathsin thenearest neighbor graphof the clusters. Every such path will eventually terminate at a pair of clusters that are nearest neighbors of each other, and the algorithm chooses that pair of clusters as the pair to merge. In order to save work by re-using as much as possible of each path, the algorithm uses astack data structureto keep track of each path that it follows. By following paths in this way, the nearest-neighbor chain algorithm merges its clusters in a different order than methods that always find and merge the closest pair of clusters. However, despite that difference, it always generates the same hierarchy of clusters.
The nearest-neighbor chain algorithm constructs a clustering in time proportional to the square of the number of points to be clustered. This is also proportional to the size of its input, when the input is provided in the form of an explicitdistance matrix. The algorithm uses an amount of memory proportional to the number of points, when it is used for clustering methods such as Ward's method that allow constant-time calculation of the distance between clusters. However, for some other clustering methods it uses a larger amount of memory in an auxiliary data structure with which it keeps track of the distances between pairs of clusters.
Many problems indata analysisconcernclustering, grouping data items into clusters of closely related items.Hierarchical clusteringis a version of cluster analysis in which the clusters form a hierarchy or tree-like structure rather than a strict partition of the data items. In some cases, this type of clustering may be performed as a way of performing cluster analysis at multiple different scales simultaneously. In others, the data to be analyzed naturally has an unknown tree structure and the goal is to recover that structure by performing the analysis. Both of these kinds of analysis can be seen, for instance, in the application of hierarchical clustering tobiological taxonomy. In this application, different living things are grouped into clusters at different scales or levels of similarity (species, genus, family, etc). This analysis simultaneously gives a multi-scale grouping of the organisms of the present age, and aims to accurately reconstruct the branching process orevolutionary treethat in past ages produced these organisms.[1]
The input to a clustering problem consists of a set of points.[2]Aclusteris any proper subset of the points, and a hierarchical clustering is amaximalfamily of clusters with the property that any two clusters in the family are either nested ordisjoint.
Alternatively, a hierarchical clustering may be represented as abinary treewith the points at its leaves; the clusters of the clustering are the sets of points in subtrees descending from each node of the tree.[3]
In agglomerative clustering methods, the input also includes a distance function defined on the points, or a numerical measure of their dissimilarity.
The distance or dissimilarity should be symmetric: the distance between two points does not depend on which of them is considered first.
However, unlike the distances in ametric space, it is not required to satisfy thetriangle inequality.[2]Next, the dissimilarity function is extended from pairs of points to pairs of clusters. Different clustering methods perform this extension in different ways. For instance, in thesingle-linkage clusteringmethod, the distance between two clusters is defined to be the minimum distance between any two points from each cluster. Given this distance between clusters, a hierarchical clustering may be defined by agreedy algorithmthat initially places each point in its own single-point cluster and then repeatedly forms a new cluster by merging theclosest pairof clusters.[2]
The bottleneck of this greedy algorithm is the subproblem of finding which two clusters to merge in each step.
Known methods for repeatedly finding the closest pair of clusters in a dynamic set of clusters either require superlinear space to maintain adata structurethat can find closest pairs quickly, or they take greater than linear time to find each closest pair.[4][5]The nearest-neighbor chain algorithm uses a smaller amount of time and space than the greedy algorithm by merging pairs of clusters in a different order. In this way, it avoids the problem of repeatedly finding closest pairs. Nevertheless, for many types of clustering problem, it can be guaranteed to come up with the same hierarchical clustering as the greedy algorithm despite the different merge order.[2]
Intuitively, the nearest neighbor chain algorithm repeatedly follows a chain of clustersA→B→C→ ...where each cluster is the nearest neighbor of the previous one, until reaching a pair of clusters that are mutual nearest neighbors.[2]
In more detail, the algorithm performs the following steps:[2][6]maintain a stack of clusters, initially empty, together with the set of active (not yet merged) clusters. If the stack is empty, push an arbitrary active cluster onto it. Otherwise, let C be the cluster on top of the stack and find its nearest neighbor D among the other active clusters. If D is the cluster immediately below C on the stack (so that C and D are mutual nearest neighbors), pop both clusters from the stack, merge them into a new active cluster, and continue; otherwise, push D onto the stack. Repeat until only one active cluster remains.
When it is possible for one cluster to have multiple equal nearest neighbors, then the algorithm requires a consistent tie-breaking rule. For instance, one may assign arbitrary index numbers to all of the clusters,
and then select (among the equal nearest neighbors) the one with the smallest index number. This rule prevents certain kinds of inconsistent behavior in the algorithm; for instance, without such a rule, the neighboring clusterDmight occur earlier in the stack than as the predecessor ofC.[7]
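A compact Python sketch of this chain-following loop is given below; dist stands for any reducible cluster distance supplied by the caller, and clusters are represented as frozensets of point indices purely for illustration.

def nn_chain_clustering(points, dist):
    """points: list of items; dist(A, B): reducible distance between two
    frozensets of indices. Returns the list of merges performed."""
    active = {frozenset([i]) for i in range(len(points))}
    stack, merges = [], []
    while len(active) > 1:
        if not stack:
            stack.append(next(iter(active)))           # start a new chain anywhere
        top = stack[-1]
        # Nearest active neighbor of the chain's top (excluding itself),
        # with ties broken by a fixed ordering of the clusters.
        nearest = min((c for c in active if c != top),
                      key=lambda c: (dist(top, c), sorted(c)))
        if len(stack) > 1 and nearest == stack[-2]:
            a, b = stack.pop(), stack.pop()             # mutual nearest neighbors
            active.discard(a)
            active.discard(b)
            merged = a | b
            active.add(merged)
            merges.append((a, b, merged))
            # By reducibility, the clusters remaining on the stack still form
            # a valid chain of nearest neighbors.
        else:
            stack.append(nearest)
    return merges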
Each iteration of the loop performs a single search for the nearest neighbor of a cluster, and either adds one cluster to the stack or removes two clusters from it. Every cluster is only ever added once to the stack, because when it is removed again it is immediately made inactive and merged. There are a total of2n− 2clusters that ever get added to the stack:nsingle-point clusters in the initial set, andn− 2internal nodes other than the root in the binary tree representing the clustering. Therefore, the algorithm performs2n− 2pushing iterations andn− 1popping iterations.[2]
Each of these iterations may spend time scanning as many asn− 1inter-cluster distances to find the nearest neighbor.
The total number of distance calculations it makes is therefore less than3n2.
For the same reason, the total time used by the algorithm outside of these distance calculations isO(n2).[2]
Since the only data structure is the set of active clusters and the stack containing a subset of the active clusters, the space required is linear in the number of input points.[2]
For the algorithm to be correct, it must be the case that popping and merging the top two clusters from the algorithm's stack preserves the property that the remaining clusters on the stack form a chain of nearest neighbors.
Additionally, it should be the case that all of the clusters produced during the algorithm are the same as the clusters produced by agreedy algorithmthat always merges the closest two clusters, even though the greedy algorithm
will in general perform its merges in a different order than the nearest-neighbor chain algorithm. Both of these properties depend on the specific choice of how to measure the distance between clusters.[2]
The correctness of this algorithm relies on a property of its distance function calledreducibility. This property was identified byBruynooghe (1977)in connection with an earlier clustering method that used mutual nearest neighbor pairs but not chains of nearest neighbors.[8]A distance functiondon clusters is defined to be reducible if, for every three clustersA,BandCin the greedy hierarchical clustering such thatAandBare mutual nearest neighbors, the following inequality holds:[2]d(A∪B,C)≥min(d(A,C),d(B,C)).{\displaystyle d(A\cup B,C)\geq \min {\bigl (}d(A,C),d(B,C){\bigr )}.}
If a distance function has the reducibility property, then merging two clustersCandDcan only cause the nearest neighbor ofEto change if that nearest neighbor was one ofCandD. This has two important consequences for the nearest neighbor chain algorithm. First, it can be shown using this property that, at each step of the algorithm, the clusters on the stackSform a valid chain of nearest neighbors, because whenever a nearest neighbor becomes invalidated it is immediately removed from the stack.[2]
Second, and even more importantly, it follows from this property that, if two clustersCandDboth belong to the greedy hierarchical clustering, and are mutual nearest neighbors at any point in time, then they will be merged by the greedy clustering, for they must remain mutual nearest neighbors until they are merged. It follows that each mutual nearest neighbor pair found by the nearest neighbor chain algorithm is also a pair of clusters found by the greedy algorithm, and therefore that the nearest neighbor chain algorithm computes exactly the same clustering (although in a different order) as the greedy algorithm.[2]
Ward's methodis an agglomerative clustering method in which the dissimilarity between two clustersAandBis measured by the amount by which merging the two clusters into a single larger cluster would increase the average squared distance of a point to its clustercentroid.[9]That is,
Expressed in terms of the centroidcA{\displaystyle c_{A}}andcardinalitynA{\displaystyle n_{A}}of the two clusters, it has the simpler formulad(A,B)=nAnBnA+nB‖cA−cB‖2,{\displaystyle d(A,B)={\frac {n_{A}n_{B}}{n_{A}+n_{B}}}\left\|c_{A}-c_{B}\right\|^{2},}
allowing it to be computed in constant time per distance calculation.
Although highly sensitive tooutliers, Ward's method is the most popular variation of agglomerative clustering both because of the round shape of the clusters it typically forms and because of its principled definition as the clustering that at each step has the smallest variance within its clusters.[10]Alternatively, this distance can be seen as the difference ink-means costbetween the new cluster and the two old clusters.
Ward's distance is also reducible, as can be seen more easily from a different formula for calculating the distance of a merged cluster from the distances of the clusters it was merged from:[9][11]d(A∪B,C)=(nA+nC)d(A,C)+(nB+nC)d(B,C)−nCd(A,B)nA+nB+nC.{\displaystyle d(A\cup B,C)={\frac {(n_{A}+n_{C})\,d(A,C)+(n_{B}+n_{C})\,d(B,C)-n_{C}\,d(A,B)}{n_{A}+n_{B}+n_{C}}}.}
Distance update formulas such as this one are called formulas "of Lance–Williams type" after the work ofLance & Williams (1967).
Ifd(A,B){\displaystyle d(A,B)}is the smallest of the three distances on the right hand side (as would necessarily be true ifA{\displaystyle A}andB{\displaystyle B}are mutual nearest-neighbors) then the negative contribution from its term is cancelled by thenC{\displaystyle n_{C}}coefficient of one of the two other terms, leaving a positive value added to the weighted average of the other two distances. Therefore, the combined distance is always at least as large as the minimum ofd(A,C){\displaystyle d(A,C)}andd(B,C){\displaystyle d(B,C)}, meeting the definition of reducibility.
Because Ward's distance is reducible, the nearest-neighbor chain algorithm using Ward's distance calculates exactly the same clustering as the standard greedy algorithm. Fornpoints in aEuclidean spaceof constant dimension, it takes timeO(n2)and spaceO(n).[6]
Complete-linkageor furthest-neighbor clustering is a form of agglomerative clustering that defines the dissimilarity between clusters to be the maximum distance between any two points from the two clusters. Similarly, average-distance clustering uses the average pairwise distance as the dissimilarity. Like Ward's distance, these two forms of clustering obey a formula of Lance–Williams type. In complete linkage, the distanced(A∪B,C){\displaystyle d(A\cup B,C)}is the maximum of the two distancesd(A,C){\displaystyle d(A,C)}andd(B,C){\displaystyle d(B,C)}. Therefore, it is at least equal to the minimum of these two distances, the requirement for being reducible. For average distance,d(A∪B,C){\displaystyle d(A\cup B,C)}is just a weighted average of the distancesd(A,C){\displaystyle d(A,C)}andd(B,C){\displaystyle d(B,C)}. Again, this is at least as large as the minimum of the two distances. Thus, in both of these cases, the distance is reducible.[9][11]
Unlike Ward's method, these two forms of clustering do not have a constant-time method for computing distances between pairs of clusters. Instead it is possible to maintain an array of distances between all pairs of clusters. Whenever two clusters are merged, the formula can be used to compute the distance between the merged cluster and all other clusters. Maintaining this array over the course of the clustering algorithm takes time and spaceO(n2). The nearest-neighbor chain algorithm may be used in conjunction with this array of distances to find the same clustering as the greedy algorithm for these cases. Its total time and space, using this array, is alsoO(n2).[12]
The sameO(n2)time and space bounds can also be achieved in a different way,
by a technique that overlays aquadtree-based priority queue data structure on top of the distance matrix and uses it to perform the standard greedy clustering algorithm.
This quadtree method is more general, as it works even for clustering methods that are not reducible.[4]However, the nearest-neighbor chain algorithm matches its time and space bounds while using simpler data structures.[12]
Insingle-linkageor nearest-neighbor clustering, the oldest form of agglomerative hierarchical clustering,[11]the dissimilarity between clusters is measured as the minimum distance between any two points from the two clusters. With this dissimilarity,d(A∪B,C)=min(d(A,C),d(B,C)),{\displaystyle d(A\cup B,C)=\min {\bigl (}d(A,C),d(B,C){\bigr )},}
so the requirement of reducibility is met as an equality rather than merely as an inequality. (Single-linkage also obeys a Lance–Williams formula,[9][11]but with a negative coefficient from which it is more difficult to prove reducibility.)
As with complete linkage and average distance, the difficulty of calculating cluster distances causes the nearest-neighbor chain algorithm to take time and spaceO(n2)to compute the single-linkage clustering.
However, the single-linkage clustering can be found more efficiently by an alternative algorithm that computes theminimum spanning treeof the input distances usingPrim's algorithm, and then sorts the minimum spanning tree edges and uses this sorted list to guide the merger of pairs of clusters. Within Prim's algorithm, each successive minimum spanning tree edge can be found by asequential searchthrough an unsorted list of the smallest edges connecting the partially constructed tree to each additional vertex. This choice saves the time that the algorithm would otherwise spend adjusting the weights of vertices in itspriority queue. Using Prim's algorithm in this way would take timeO(n2)and spaceO(n), matching the best bounds that could be achieved with the nearest-neighbor chain algorithm for distances with constant-time calculations.[13]
Another distance measure commonly used in agglomerative clustering is the distance between the centroids of pairs of clusters, also known as the weighted group method.[9][11]It can be calculated easily in constant time per distance calculation. However, it is not reducible. For instance, if the input forms the set of three points of an equilateral triangle, merging two of these points into a larger cluster causes the inter-cluster distance to decrease, a violation of reducibility. Therefore, the nearest-neighbor chain algorithm will not necessarily find the same clustering as the greedy algorithm. Nevertheless,Murtagh (1983)writes that the nearest-neighbor chain algorithm provides "a good heuristic" for the centroid method.[2]A different algorithm byDay & Edelsbrunner (1984)can be used to find the greedy clustering inO(n2)time for this distance measure.[5]
The above presentation explicitly disallowed distances sensitive to merge order. Indeed, allowing such distances can cause problems. In particular, there exist order-sensitive cluster distances which satisfy reducibility, but for which the above algorithm will return a hierarchy with suboptimal costs. Therefore, when cluster distances are defined by a recursive formula (as some of the ones discussed above are), care must be taken that they do not use the hierarchy in a way which is sensitive to merge order.[14]
The nearest-neighbor chain algorithm was developed and implemented in 1982 byJean-Paul Benzécri[15]and J. Juan.[16]They based this algorithm on earlier methods that constructed hierarchical clusterings using mutual nearest neighbor pairs without taking advantage of nearest neighbor chains.[8][17]
|
https://en.wikipedia.org/wiki/Nearest-neighbor_chain_algorithm
|
Numerical taxonomyis aclassification systemin biologicalsystematicswhich deals with the grouping bynumerical methodsoftaxonomic unitsbased on their character states.[1]It aims to create ataxonomyusing numeric algorithms likecluster analysisrather than using subjective evaluation of their properties. The concept was first developed byRobert R. SokalandPeter H. A. Sneathin 1963[2]and later elaborated by the same authors.[3]They divided the field intopheneticsin which classifications are formed based on the patterns of overall similarities andcladisticsin which classifications are based on the branching patterns of the estimated evolutionary history of the taxa.In recent years many authors treat numerical taxonomy and phenetics as synonyms despite the distinctions made by those authors.[citation needed]
Although intended as an objective method, in practice the choice and implicit or explicitweightingof characteristics is influenced by available data and research interests of the investigator. What was made objective was the introduction of explicit steps to be used to createdendrogramsandcladogramsusing numerical methods rather than subjective synthesis of data.
|
https://en.wikipedia.org/wiki/Numerical_taxonomy
|
Ordering points to identify the clustering structure(OPTICS) is an algorithm for finding density-based[1]clustersin spatial data. It was presented in 1999 by Mihael Ankerst, Markus M. Breunig,Hans-Peter Kriegeland Jörg Sander.[2]Its basic idea is similar toDBSCAN,[3]but it addresses one of DBSCAN's major weaknesses: the problem of detecting meaningful clusters in data of varying density. To do so, the points of the database are (linearly) ordered such that spatially closest points become neighbors in the ordering. Additionally, a special distance is stored for each point that represents the density that must be accepted for a cluster so that both points belong to the same cluster. This is represented as adendrogram.
LikeDBSCAN, OPTICS requires two parameters:ε, which describes the maximum distance (radius) to consider, andMinPts, describing the number of points required to form a cluster. A pointpis acore pointif at leastMinPtspoints are found within itsε-neighborhoodNε(p){\displaystyle N_{\varepsilon }(p)}(including pointpitself). In contrast toDBSCAN, OPTICS also considers points that are part of a more densely packed cluster, so each point is assigned acore distancethat describes the distance to theMinPtsth closest point: the core distance ofp{\displaystyle p}is UNDEFINED if fewer thanMinPtspoints lie within itsε-neighborhood, and otherwise it equals the distance fromp{\displaystyle p}to itsMinPts-th closest point.
Thereachability-distanceof another pointofrom a pointpis either the distance betweenoandp, or the core distance ofp, whichever is bigger:reachability-dist(o,p)=max(core-dist(p),dist(p,o)).{\displaystyle {\text{reachability-dist}}(o,p)=\max {\bigl (}{\text{core-dist}}(p),{\text{dist}}(p,o){\bigr )}.}
Ifpandoare nearest neighbors, this is theε′<ε{\displaystyle \varepsilon '<\varepsilon }we need to assume to havepandobelong to the same cluster.
Both core-distance and reachability-distance are undefined if no sufficiently dense cluster (w.r.t.ε) is available. Given a sufficiently largeε, this never happens, but then everyε-neighborhood query returns the entire database, resulting inO(n2){\displaystyle O(n^{2})}runtime. Hence, theεparameter is required to cut off the density of clusters that are no longer interesting, and to speed up the algorithm.
The parameterεis, strictly speaking, not necessary. It can simply be set to the maximum possible value. When a spatial index is available, however, it does play a practical role with regards to complexity. OPTICS abstracts from DBSCAN by removing this parameter, at least to the extent of only having to give the maximum value.
The basic approach of OPTICS is similar toDBSCAN, but instead of maintaining known, but so far unprocessed cluster members in a set, they are maintained in apriority queue(e.g. using an indexedheap).
In update(), the priority queue Seeds is updated with theε{\displaystyle \varepsilon }-neighborhood ofp{\displaystyle p}andq{\displaystyle q}, respectively: for each not-yet-processed neighbor, the reachability-distance from the current point is computed, and the neighbor is inserted into Seeds with that value, or moved forward in the queue if it is already present and the new reachability-distance is smaller.
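A condensed Python sketch of this ordering loop is given below; a plain sorted list is used in place of an indexed heap, points are assumed to be hashable, and get_neighbors and dist are assumed to be supplied by the caller (they are not part of the original pseudocode).

import math

def optics(points, eps, min_pts, get_neighbors, dist):
    """Returns the cluster ordering as (point, reachability-distance) pairs.
    get_neighbors(p, eps) is assumed to include p itself, as in the definition above."""
    reach = {p: math.inf for p in points}
    processed, ordering = set(), []

    def core_distance(p, neighbors):
        if len(neighbors) < min_pts:
            return None                              # p is not a core point
        return sorted(dist(p, n) for n in neighbors)[min_pts - 1]

    def update(p, neighbors, seeds):
        core_d = core_distance(p, neighbors)
        for o in neighbors:
            if o in processed:
                continue
            new_reach = max(core_d, dist(p, o))      # reachability-distance
            if new_reach < reach[o]:
                reach[o] = new_reach
                if o not in seeds:
                    seeds.append(o)

    for p in points:
        if p in processed:
            continue
        neighbors = get_neighbors(p, eps)
        processed.add(p)
        ordering.append((p, reach[p]))
        if core_distance(p, neighbors) is None:
            continue
        seeds = []
        update(p, neighbors, seeds)
        while seeds:
            seeds.sort(key=lambda q: reach[q])       # smallest reachability first
            q = seeds.pop(0)
            q_neighbors = get_neighbors(q, eps)
            processed.add(q)
            ordering.append((q, reach[q]))
            if core_distance(q, q_neighbors) is not None:
                update(q, q_neighbors, seeds)
    return ordering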
OPTICS hence outputs the points in a particular ordering, annotated with their smallest reachability distance (in the original algorithm, the core distance is also exported, but this is not required for further processing).
Using areachability-plot(a special kind ofdendrogram), the hierarchical structure of the clusters can be obtained easily. It is a 2D plot, with the ordering of the points as processed by OPTICS on the x-axis and the reachability distance on the y-axis. Since points belonging to a cluster have a low reachability distance to their nearest neighbor, the clusters show up as valleys in the reachability plot. The deeper the valley, the denser the cluster.
The image above illustrates this concept. In its upper left area, a synthetic example data set is shown. The upper right part visualizes the spanning tree produced by OPTICS, and the lower part shows the reachability plot as computed by OPTICS. Colors in this plot are labels, not computed by the algorithm; but it is clearly visible how the valleys in the plot correspond to the clusters in the data set above. The yellow points in this image are considered noise, and no valley is found in their reachability plot. They are usually not assigned to clusters, except for the omnipresent "all data" cluster in a hierarchical result.
Extracting clusters from this plot can be done manually by selecting ranges on the x-axis after visual inspection, by selecting a threshold on the y-axis (the result is then similar to a DBSCAN clustering result with the same ε{\displaystyle \varepsilon } and minPts parameters; here a value of 0.1 may yield good results), or by different algorithms that try to detect the valleys by steepness, knee detection, or local maxima. A range of the plot beginning with a steep descent and ending with a steep ascent is considered a valley, and corresponds to a contiguous area of high density. Additional care must be taken with the last points in a valley when assigning them to the inner or outer cluster; this can be achieved by considering the predecessor.[4] Clusterings obtained this way are usually hierarchical, and cannot be achieved by a single DBSCAN run.
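As a minimal example of the threshold-based extraction just mentioned, the sketch below cuts the reachability values (in processing order) at a fixed y-value, assuming the order, reach, and core arrays returned by the OPTICS sketch earlier in this section; the handling of cluster starts is deliberately simplified.

# DBSCAN-like extraction: cut the reachability plot at a fixed threshold.
def extract_dbscan_like(order, reach, core, threshold):
    """Label points by cutting the reachability plot at `threshold`; -1 = noise."""
    labels = {}
    cluster_id = -1
    for p in order:
        if reach[p] > threshold:
            if core[p] <= threshold:       # p is dense enough to start a new cluster
                cluster_id += 1
                labels[p] = cluster_id
            else:
                labels[p] = -1             # noise
        else:
            labels[p] = cluster_id         # p continues the current valley/cluster
    return labels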
LikeDBSCAN, OPTICS processes each point once, and performs oneε{\displaystyle \varepsilon }-neighborhood queryduring this processing. Given aspatial indexthat grants a neighborhood query inO(logn){\displaystyle O(\log n)}runtime, an overall runtime ofO(n⋅logn){\displaystyle O(n\cdot \log n)}is obtained. The worst case however isO(n2){\displaystyle O(n^{2})}, as with DBSCAN. The authors of the original OPTICS paper report an actual constant slowdown factor of 1.6 compared to DBSCAN. Note that the value ofε{\displaystyle \varepsilon }might heavily influence the cost of the algorithm, since a value too large might raise the cost of a neighborhood query to linear complexity.
In particular, choosingε>maxx,yd(x,y){\displaystyle \varepsilon >\max _{x,y}d(x,y)}(larger than the maximum distance in the data set) is possible, but leads to quadratic complexity, since every neighborhood query returns the full data set. Even when no spatial index is available, this comes at additional cost in managing the heap. Therefore,ε{\displaystyle \varepsilon }should be chosen appropriately for the data set.
OPTICS-OF[5]is anoutlier detectionalgorithm based on OPTICS. The main use is the extraction of outliers from an existing run of OPTICS at low cost compared to using a different outlier detection method. The better known versionLOFis based on the same concepts.
DeLi-Clu,[6]Density-Link-Clustering combines ideas fromsingle-linkage clusteringand OPTICS, eliminating theε{\displaystyle \varepsilon }parameter and offering performance improvements over OPTICS.
HiSC[7]is a hierarchicalsubspace clustering(axis-parallel) method based on OPTICS.
HiCO[8]is a hierarchicalcorrelation clusteringalgorithm based on OPTICS.
DiSH[9]is an improvement over HiSC that can find more complex hierarchies.
FOPTICS[10]is a faster implementation using random projections.
HDBSCAN*[11]is based on a refinement of DBSCAN, excluding border-points from the clusters and thus following more strictly the basic definition of density-levels by Hartigan.[12]
Java implementations of OPTICS, OPTICS-OF, DeLi-Clu, HiSC, HiCO and DiSH are available in theELKI data mining framework(with index acceleration for several distance functions, and with automatic cluster extraction using the ξ extraction method). Other Java implementations include theWekaextension (no support for ξ cluster extraction).
TheRpackage "dbscan" includes a C++ implementation of OPTICS (with both traditional dbscan-like and ξ cluster extraction) using ak-d treefor index acceleration for Euclidean distance only.
Python implementations of OPTICS are available in thePyClusteringlibrary and inscikit-learn. HDBSCAN* is available in thehdbscanlibrary.
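A rough usage sketch of the scikit-learn estimator is shown below; the data and parameter values are illustrative assumptions, and the attribute names (reachability_, ordering_, labels_) reflect scikit-learn's documented interface, which should be checked against the installed version.

# Rough usage sketch of scikit-learn's OPTICS; parameter values are illustrative.
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(100, 2)),   # dense cluster
    rng.normal(loc=3.0, scale=0.8, size=(100, 2)),   # sparser cluster
])

model = OPTICS(min_samples=10, max_eps=np.inf, cluster_method="xi", xi=0.05)
model.fit(X)

reachability = model.reachability_[model.ordering_]  # values for the reachability plot
labels = model.labels_                               # -1 marks noise
print(labels[:10], reachability[:10])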
|
https://en.wikipedia.org/wiki/OPTICS_algorithm
|
Intopological data analysis,persistent homologyis a method for computing topological features of a space at different spatial resolutions. More persistent features are detected over a wide range of spatial scales and are deemed more likely to represent true features of the underlying space rather than artifacts of sampling, noise, or particular choice of parameters.[1]
To find the persistent homology of a space, the space must first be represented as a simplicial complex. A distance function on the underlying space corresponds to a filtration of the simplicial complex, that is, a nested sequence of increasing subsets. One common method is to take the sublevel filtration of the distance to a point cloud, or equivalently, the offset filtration on the point cloud, and take its nerve in order to obtain the simplicial filtration known as the Čech filtration.[2] A similar construction uses a nested sequence of Vietoris–Rips complexes, known as the Vietoris–Rips filtration.[3]
Formally, consider a real-valued function on a simplicial complex f:K→R{\displaystyle f:K\rightarrow \mathbb {R} } that is non-decreasing on increasing sequences of faces, so f(σ)≤f(τ){\displaystyle f(\sigma )\leq f(\tau )} whenever σ{\displaystyle \sigma } is a face of τ{\displaystyle \tau } in K{\displaystyle K}. Then for every a∈R{\displaystyle a\in \mathbb {R} } the sublevel set Ka=f−1((−∞,a]){\displaystyle K_{a}=f^{-1}((-\infty ,a])} is a subcomplex of K, and the ordering of the values of f{\displaystyle f} on the simplices in K{\displaystyle K} (which is in practice always finite) induces an ordering on the sublevel complexes that defines a filtration
∅=K0⊆K1⊆⋯⊆Kn=K{\displaystyle \emptyset =K_{0}\subseteq K_{1}\subseteq \cdots \subseteq K_{n}=K}
When 0≤i≤j≤n{\displaystyle 0\leq i\leq j\leq n}, the inclusion Ki↪Kj{\displaystyle K_{i}\hookrightarrow K_{j}} induces a homomorphism fpi,j:Hp(Ki)→Hp(Kj){\displaystyle f_{p}^{i,j}:H_{p}(K_{i})\rightarrow H_{p}(K_{j})} on the simplicial homology groups for each dimension p{\displaystyle p}. The pth{\displaystyle p^{\text{th}}} persistent homology groups are the images of these homomorphisms, and the pth{\displaystyle p^{\text{th}}} persistent Betti numbers βpi,j{\displaystyle \beta _{p}^{i,j}} are the ranks of those groups.[4] Persistent Betti numbers for p=0{\displaystyle p=0} coincide with the size function, a predecessor of persistent homology.[5]
Any filtered complex over a fieldF{\displaystyle F}can be brought by a linear transformation preserving the filtration to so calledcanonical form, a canonically defined direct sum of filtered complexes of two types: one-dimensional complexes with trivial differentiald(eti)=0{\displaystyle d(e_{t_{i}})=0}and two-dimensional complexes with trivial homologyd(esj+rj)=erj{\displaystyle d(e_{s_{j}+r_{j}})=e_{r_{j}}}.[6]
Apersistence moduleover apartially orderedsetP{\displaystyle P}is a set of vector spacesUt{\displaystyle U_{t}}indexed byP{\displaystyle P}, with a linear maputs:Us→Ut{\displaystyle u_{t}^{s}:U_{s}\to U_{t}}whenevers≤t{\displaystyle s\leq t}, withutt{\displaystyle u_{t}^{t}}equal to the identity anduts∘usr=utr{\displaystyle u_{t}^{s}\circ u_{s}^{r}=u_{t}^{r}}forr≤s≤t{\displaystyle r\leq s\leq t}. Equivalently, we may consider it as afunctorfromP{\displaystyle P}considered as a category to the category of vector spaces (orR{\displaystyle R}-modules). There is a classification of persistence modules over a fieldF{\displaystyle F}indexed byN{\displaystyle \mathbb {N} }:U≃⨁ixti⋅F[x]⊕(⨁jxrj⋅(F[x]/(xsj⋅F[x]))).{\displaystyle U\simeq \bigoplus _{i}x^{t_{i}}\cdot F[x]\oplus \left(\bigoplus _{j}x^{r_{j}}\cdot (F[x]/(x^{s_{j}}\cdot F[x]))\right).}Multiplication byx{\displaystyle x}corresponds to moving forward one step in the persistence module. Intuitively, the free parts on the right side correspond to the homology generators that appear at filtration levelti{\displaystyle t_{i}}and never disappear, while the torsion parts correspond to those that appear at filtration levelrj{\displaystyle r_{j}}and last forsj{\displaystyle s_{j}}steps of the filtration (or equivalently, disappear at filtration levelsj+rj{\displaystyle s_{j}+r_{j}}).[7][6]
Each of these two theorems allows us to uniquely represent the persistent homology of a filtered simplicial complex with apersistence barcodeorpersistence diagram. A barcode represents each persistent generator with a horizontal line beginning at the first filtration level where it appears, and ending at the filtration level where it disappears, while a persistence diagram plots a point for each generator with its x-coordinate the birth time and its y-coordinate the death time.
Equivalently the same data is represented by Barannikov'scanonical form,[6]where each generator is represented by a segment connecting the birth and the death values plotted on separate lines for eachp{\displaystyle p}.
Persistent homology is stable in a precise sense, which provides robustness against noise. Thebottleneck distanceis a natural metric on the space of persistence diagrams given byW∞(X,Y):=infφ:X→Ysupx∈X‖x−φ(x)‖∞,{\displaystyle W_{\infty }(X,Y):=\inf _{\varphi :X\to Y}\sup _{x\in X}\Vert x-\varphi (x)\Vert _{\infty },}whereφ{\displaystyle \varphi }ranges over bijections. A small perturbation in the input filtration leads to a small perturbation of its persistence diagram in the bottleneck distance. For concreteness, consider a filtration on a spaceX{\displaystyle X}homeomorphic to a simplicial complex determined by the sublevel sets of a continuous tame functionf:X→R{\displaystyle f:X\to \mathbb {R} }. The mapD{\displaystyle D}takingf{\displaystyle f}to the persistence diagram of itsk{\displaystyle k}th homology is 1-Lipschitzwith respect to thesup{\displaystyle \sup }-metric on functions and the bottleneck distance on persistence diagrams.
That is,W∞(D(f),D(g))≤‖f−g‖∞{\displaystyle W_{\infty }(D(f),D(g))\leq \lVert f-g\rVert _{\infty }}.[8]
The principal algorithm is based on bringing the filtered complex to its canonical form by upper-triangular matrices and runs in worst-case cubic complexity in the number of simplices.[6] The fastest known algorithm for computing persistent homology runs in matrix multiplication time.[9]
Since the number of simplices is highly relevant for computation time, finding filtered simplicial complexes with few simplices is an active research area. Several approaches have been proposed to reduce the number of simplices in a filtered simplicial complex in order to approximate persistent homology.[10][11][12][13]
There are various software packages for computing persistence intervals of a finite filtration.[14]
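As one hedged example, the Python package ripser computes Vietoris–Rips persistence diagrams from a point cloud; the call below follows its commonly documented interface, and the noisy-circle data set is an invented illustration.

# Sketch: persistence diagrams of a noisy circle via a Vietoris-Rips filtration.
# Assumes the `ripser` package; the API shown is its commonly documented one.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
points = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (200, 2))

result = ripser(points, maxdim=1)      # homology in dimensions 0 and 1
h0, h1 = result["dgms"]                # arrays of (birth, death) pairs

# A single long-lived H1 feature is expected, corresponding to the circle.
persistence = h1[:, 1] - h1[:, 0]
print("most persistent 1-cycle lives for", persistence.max())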
|
https://en.wikipedia.org/wiki/Persistent_homology
|
In social sciences,sequence analysis (SA)is concerned with the analysis of sets of categorical sequences that typically describelongitudinal data. Analyzed sequences are encoded representations of, for example, individual life trajectories such as family formation, school to work transitions, working careers, but they may also describe daily or weekly time use or represent the evolution of observed or self-reported health, of political behaviors, or the development stages of organizations. Such sequences are chronologically ordered unlike words or DNA sequences for example.
SA is a longitudinal analysis approach that is holistic in the sense that it considers each sequence as a whole. SA is essentially exploratory. Broadly, SA provides a comprehensible overall picture of sets of sequences with the objective of characterizing the structure of the set of sequences, finding the salient characteristics of groups, identifying typical paths, comparing groups, and more generally studying how the sequences are related to covariates such as sex, birth cohort, or social origin.
Introduced in the social sciences in the 1980s by Andrew Abbott,[1][2] SA gained much popularity after the release of dedicated software such as the SQ[3] and SADI[4] add-ons for Stata and the TraMineR R package[5] with its companions TraMineRextras[6] and WeightedCluster.[7]
Despite some connections, the aims and methods of SA in social sciences strongly differ from those ofsequence analysis in bioinformatics.
Sequence analysis methods were first imported into the social sciences from the information and biological sciences (seeSequence alignment) by theUniversity of ChicagosociologistAndrew Abbottin the 1980s, and they have since developed in ways that are unique to the social sciences.[8]Scholars inpsychology,economics,anthropology,demography,communication,political science,learning sciences, organizational studies, and especiallysociologyhave been using sequence methods ever since.
In sociology, sequence techniques are most commonly employed in studies of patterns of life-course development, cycles, and life histories.[9][10][11][12]There has been a great deal of work on the sequential development of careers,[13][14][15]and there is increasing interest in how career trajectories intertwine with life-course sequences.[16][17]Many scholars have used sequence techniques to model how work and family activities are linked in household divisions of labor and the problem of schedule synchronization within families.[18][19][20]The study of interaction patterns is increasingly centered on sequential concepts, such as turn-taking, the predominance of reciprocal utterances, and the strategic solicitation of preferred types of responses (seeConversation Analysis). Social network analysts (seeSocial network analysis) have begun to turn to sequence methods and concepts to understand how social contacts and activities are enacted in real time,[21][22]and to model and depict how whole networks evolve.[23]Social network epidemiologists have begun to examine social contact sequencing to better understand the spread of disease.[24]Psychologists have used those methods to study how the order of information affects learning, and to identify structure in interactions between individuals (seeSequence learning).
Many of the methodological developments in sequence analysis came on the heels of a special section devoted to the topic in a 2000 issue[10]ofSociological Methods & Research, which hosted a debate over the use of theoptimal matching(OM) edit distance for comparing sequences. In particular, sociologists objected to the descriptive and data-reducing orientation ofoptimal matching, as well as to a lack of fit between bioinformatic sequence methods and uniquely social phenomena.[25][26]The debate has given rise to several methodological innovations (seePairwise dissimilaritiesbelow) that address limitations of early sequence comparison methods developed in the 20th century. In 2006,David Starkand Balazs Vedres[23]proposed the term "social sequence analysis" to distinguish the approach from bioinformaticsequence analysis. However, if we except the nice book byBenjamin Cornwell,[27]the term was seldom used, probably because the context prevents any confusion in the SA literature.Sociological Methods & Researchorganized a special issue on sequence analysis in 2010, leading to what Aisenbrey and Fasang[28]referred to as the "second wave of sequence analysis", which mainly extended optimal matching and introduced other techniques to compare sequences. Alongside sequence comparison, recent advances in SA concerned among others the visualization of sets of sequence data,[5][29]the measure and analysis of the discrepancy of sequences,[30]the identification ofrepresentative sequences,[31]and the development of summary indicators of individual sequences.[32]Raab and Struffolino[33]have conceived more recent advances as the third wave of sequence analysis. This wave is largely characterized by the effort of bringing together the stochastic and the algorithmic modeling culture[34]by jointly applying SA with more established methods such asanalysis of variance,event history analysis,Markovian modeling,social networkanalysis, orcausal analysisandstatistical modelingin general.[35][36][37][27][30][38][39]
The analysis of sequence patterns has foundations in sociological theories that emerged in the middle of the 20th century.[27]Structural theorists argued that society is a system that is characterized by regular patterns. Even seemingly trivial social phenomena are ordered in highly predictable ways.[40]This idea serves as an implicit motivation behind social sequence analysts' use of optimal matching, clustering, and related methods to identify common "classes" of sequences at all levels of social organization, a form of pattern search. This focus on regularized patterns of social action has become an increasingly influential framework for understanding microsocial interaction and contact sequences, or "microsequences."[41]This is closely related toAnthony Giddens's theory ofstructuration, which holds that social actors' behaviors are predominantly structured by routines, and which in turn provides predictability and a sense of stability in an otherwise chaotic and rapidly moving social world.[42]This idea is also echoed inPierre Bourdieu'sconcept ofhabitus, which emphasizes the emergence and influence of stable worldviews in guiding everyday action and thus produce predictable, orderly sequences of behavior.[43]The resulting influence of routine as a structuring influence on social phenomena was first illustrated empirically byPitirim Sorokin, who led a 1939 study that found that daily life is so routinized that a given person is able to predict with about 75% accuracy how much time they will spend doing certain things the following day.[44]Talcott Parsons's argument[40]that all social actors are mutually oriented to their larger social systems (for example, their family and larger community) throughsocial rolesalso underlies social sequence analysts' interest in the linkages that exist between different social actors' schedules and ordered experiences, which has given rise to a considerable body of work onsynchronizationbetween social actors and their social contacts and larger communities.[19][18][45]All of these theoretical orientations together warrant critiques of thegeneral linear modelof social reality, which as applied in most work implies that society is either static or that it is highly stochastic in a manner that conforms toMarkovprocesses[1][46]This concern inspired the initial framing of social sequence analysis as an antidote to general linear models. It has also motivated recent attempts to model sequences of activities or events in terms as elements that link social actors in non-linear network structures[47][48]This work, in turn, is rooted inGeorg Simmel'stheory that experiencing similar activities, experiences, and statuses serves as a link between social actors.[49][50]
In demography and historical demography, from the 1980s the rapid appropriation of the life-course perspective and methods was part of a substantive paradigmatic change that implied a stronger embedding of demographic processes in social-science dynamics. After a first phase focused on the occurrence and timing of demographic events studied separately from each other with a hypothetico-deductive approach, from the early 2000s[34][51] the need to consider the structure of life courses and to do justice to their complexity led to a growing use of sequence analysis with the aim of pursuing a holistic approach. At an inter-individual level, pairwise dissimilarities and clustering appeared as the appropriate tools for revealing heterogeneity in human development. For example, the meta-narrations contrasting individualized Western societies with collectivist societies in the South (especially in Asia) were challenged by comparative studies revealing the diversity of pathways to legitimate reproduction.[52] At an intra-individual level, sequence analysis integrates the basic life-course principle that individuals interpret and make decisions about their lives according to their past experiences and their perception of contingencies.[34] Interest in this perspective was also promoted by the changes in individuals' life courses for cohorts born between the beginning and the end of the 20th century. These changes have been described as de-standardization, de-synchronization, and de-institutionalization.[53] Among the drivers of these dynamics, the transition to adulthood is key:[54] for more recent birth cohorts this crucial phase of the individual life course involved a larger number of events and more varied lengths of the state spells experienced. For example, many postponed leaving the parental home and the transition to parenthood, in some contexts cohabitation replaced marriage as a long-lasting living arrangement, and the birth of the first child occurs more frequently while parents cohabit rather than within wedlock.[55] Such complexity needed to be measured in order to compare quantitative indicators across birth cohorts[11][56] (see[57] for an extension of this questioning to populations in low- and middle-income countries). Demography's old ambition to develop a 'family demography' has found in sequence analysis a powerful tool to address research questions at the crossroads with other disciplines: for example, multichannel techniques[58] offer valuable opportunities to deal with the issue of compatibility between working and family lives.[59][37] Similarly, more recent combinations of sequence analysis and event history analysis have been developed (see[36] for a review) and can be applied, for instance, to understanding the link between demographic transitions and health.
The analysis of temporal processes in political science[60] concerns how institutions, that is, systems and organizations (regimes, governments, parties, courts, etc.) that crystallize political interactions, formalize legal constraints and impose a degree of stability or inertia. Special importance is given, first, to the role of contexts, which confer meaning on trends and events, while shared contexts offer shared meanings; second, to changes over time in power relationships and, subsequently, asymmetries, hierarchies, contention, or conflict; and, finally, to historical events that are able to shape trajectories, such as elections, accidents, inaugural speeches, treaties, revolutions, or ceasefires. Empirically, the unit of analysis of political sequences can be individuals, organizations, movements, or institutional processes. Depending on the unit of analysis, sample sizes may be limited to a few cases (e.g., regions in a country when considering the turnover of local political parties over time) or include a few hundred (e.g., individuals' voting patterns). Three broad kinds of political sequences may be distinguished. The first and most common is careers, that is, formal, mostly hierarchical positions along which individuals progress in institutional environments, such as parliaments, cabinets, administrations, parties, unions, or business organizations.[61][62][63] We may call trajectories those political sequences that develop in more informal and fluid contexts, such as activists evolving across various causes and social movements,[64][65] or voters navigating a political and ideological landscape across successive polls.[66] Finally, processes relate to non-individual entities, such as public policies developing through successive policy stages across distinct arenas;[67] sequences of symbolic or concrete interactions between national and international actors in diplomatic and military contexts;[68][69] and the development of organizations or institutions, such as pathways of countries towards democracy (Wilson 2014).[70]
A sequence s is an ordered list of elements (s1, s2, ..., sl) taken from a finite alphabet A. For a set S of sequences, three sizes matter: the number n of sequences, the size a = |A| of the alphabet, and the length l of the sequences (which may differ between sequences). In social sciences, n generally lies between a few hundred and a few thousand, the alphabet size remains limited (most often fewer than 20 states), while sequence length rarely exceeds 100.
We may distinguish between state sequences and event sequences,[71] where states last while events occur at a single time point and do not last, but contribute, possibly together with other events, to state changes. For instance, the joint occurrence of the two events leaving home and starting a union provokes a state change from 'living at home with parents' to 'living with a partner'.
When a state sequence is represented as the list of states observed at successive time points, the position of each element in the sequence conveys this time information and the distance between positions reflects duration. An alternative, more compact representation of a sequence is the list of successive spells stamped with their durations, where a spell (also called an episode) is a substring in a same state. For example, in aabbbc, bbb is a spell of length 3 in state b, and the whole sequence can be represented as (a,2)-(b,3)-(c,1).[71]
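This spell representation is essentially a run-length encoding of the state sequence, as the short Python sketch below illustrates (the function name is arbitrary):

# Run-length ("spell") encoding of a state sequence: "aabbbc" -> [('a', 2), ('b', 3), ('c', 1)].
from itertools import groupby

def to_spells(sequence):
    return [(state, sum(1 for _ in run)) for state, run in groupby(sequence)]

print(to_spells("aabbbc"))   # [('a', 2), ('b', 3), ('c', 1)]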
A crucial point when looking at state sequences is the timing scheme used to time align the sequences. This could be the historical calendar time, or a process time such as age, i.e. time since birth.
In event sequences, positions do not convey any time information. Therefore event occurrence time must be explicitly provided (as a timestamp) when it matters.
SA is essentially concerned with state sequences.
Conventional SA consists essentially in building a typology of the observed trajectories. Abbott and Tsay (2000)[10] describe this typical SA as a three-step program: 1. coding individual narratives as sequences of states; 2. measuring pairwise dissimilarities between sequences; and 3. clustering the sequences from the pairwise dissimilarities. However, SA encompasses much more (see e.g.[35][8]), including, among others, the description and visual rendering of sets of sequences, ANOVA-like analysis and regression trees for sequences, the identification of representative sequences, the study of the relationship between linked sequences (e.g. dyadic, linked lives, or life dimensions such as occupation, family, and health), and sequence networks.
Given an alignment rule, a set of sequences can be represented in tabular form with sequences in rows and columns corresponding to the positions in the sequences.
To describe such data, we may look at the columns and consider the cross-sectional state distributions at the successive positions.
Thechronogramordensity plotof a set of sequences renders these successive cross-sectional distributions.
For each (column) distribution we can compute characteristics such as entropy or modal state and look at how these values evolve over the positions (see[5]pp 18–21).
Alternatively, we can look at the rows. Theindex plot[73]where each sequence is represented as a horizontal stacked bar or line is the basic plot for rendering individual sequences.
We can compute characteristics of the individual sequences and examine the cross-sectional distribution of these characteristics.
Main indicators of individual sequences[32]
State sequences can nicely be rendered graphically and such plots prove useful for interpretation purposes. As shown above, the two basic plots are the index plot that renders individual sequences and the chronogram that renders the evolution of the cross-sectional state distribution along the timeframe. Chronograms (also known as status proportion plot or state distribution plot) completely overlook the diversity of the sequences, while index plots are often too scattered to be readable. Relative frequency plots and plots of representative sequences attempt to increase the readability of index plots without falling in the oversimplification of a chronogram. In addition, there are many plots that focus on specific characteristics of the sequences. Below is a list of plots that have been proposed in the literature for rendering large sets of sequences. For each plot, we give examples of software (details in sectionSoftware) that produce it.
Pairwise dissimilarities between sequences serve to compare sequences and many advanced SA methods are based on these dissimilarities. The most popular dissimilarity measure isoptimal matching(OM), i.e. the minimal cost of transforming one sequence into the other by means of indel (insert or delete) and substitution operations with possibly costs of these elementary operations depending on the states involved. SA is so intimately linked with OM that it is sometimes named optimal matching analysis (OMA).
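A minimal dynamic-programming sketch of such an edit distance is given below, assuming a single indel cost and a flat substitution cost; real OM analyses typically use state-dependent substitution costs (for example, derived from observed transition rates), so this is only a schematic illustration.

# Optimal matching (edit) distance between two state sequences.
# Costs are deliberately simple: one indel cost and one flat substitution cost.
def om_distance(seq_a, seq_b, indel=1.0, substitution=2.0):
    n, m = len(seq_a), len(seq_b)
    # dp[i][j] = minimal transformation cost between seq_a[:i] and seq_b[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * indel
    for j in range(1, m + 1):
        dp[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if seq_a[i - 1] == seq_b[j - 1] else substitution
            dp[i][j] = min(dp[i - 1][j] + indel,       # delete from seq_a
                           dp[i][j - 1] + indel,       # insert into seq_a
                           dp[i - 1][j - 1] + sub)     # substitute (or keep)
    return dp[n][m]

print(om_distance("aabbbc", "aabccc"))   # distance between two short state sequences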
There are roughly three categories of dissimilarity measures:[86]
Pairwise dissimilarities between sequences give access to a series of techniques to discover holistic structuring characteristics of the sequence data. In particular, dissimilarities between sequences can serve as input to cluster algorithms and multidimensional scaling, but also allow to identify medoids or other representative sequences, define neighborhoods, measure the discrepancy of a set of sequences, proceed to ANOVA-like analyses, and grow regression trees.
Although dissimilarity-based methods play a central role in social SA, essentially because of their ability to preserve the holistic perspective, several other approaches also prove useful for analyzing sequence data.
Some recent advances can be conceived as thethird wave of SA.[33]This wave is largely characterized by the effort of bringing together the stochastic and the algorithmic modeling culture by jointly applying SA with more established methods such as analysis of variance, event history, network analysis, or causal analysis and statistical modeling in general. Some examples are given below; see also "Other methods of analysis".
Although SA witnesses a steady inflow of methodological contributions that address the issues raised two decades ago,[28]some pressing open issues remain.[36]Among the most challenging, we can mention:
Up-to-date information on advances, methodological discussions, and recent relevant publications can be found on the Sequence Analysis Associationwebpage.
These techniques have proved valuable in a variety of contexts. In life-course research, for example, research has shown that retirement plans are affected not just by the last year or two of one's life, but instead how one's work and family careers unfolded over a period of several decades. People who followed an "orderly" career path (characterized by consistent employment and gradual ladder-climbing within a single organization) retired earlier than others, including people who had intermittent careers, those who entered the labor force late, as well as those who enjoyed regular employment but who made numerous lateral moves across organizations throughout their careers.[12]In the field ofeconomic sociology, research has shown that firm performance depends not just on a firm's current or recent social network connectedness, but also the durability or stability of their connections to other firms. Firms that have more "durably cohesive" ownership network structures attract more foreign investment than less stable or poorly connected structures.[23]Research has also used data on everyday work activity sequences to identify classes of work schedules, finding that the timing of work during the day significantly affects workers' abilities to maintain connections with the broader community, such as through community events.[19]More recently, social sequence analysis has been proposed as a meaningful approach to study trajectories in the domain of creative enterprise, allowing the comparison among the idiosyncrasies of unique creative careers.[131]While other methods for constructing and analyzing whole sequence structure have been developed during the past three decades, including event structure analysis,[118][119]OM and other sequence comparison methods form the backbone of research on whole sequence structures.
Some examples of application include:
Sociology
Demography and historical demography
Political sciences
Education and learning sciences
Psychology
Medical research
Survey methodology
Geography
Two main statistical computing environments, Stata and R, offer tools to conduct a sequence analysis in the form of user-written packages.
The first international conference dedicated to social-scientific research that uses sequence analysis methods – the Lausanne Conference on Sequence Analysis, orLaCOSA– was held in Lausanne, Switzerland in June 2012.[159]A second conference (LaCOSA II) was held in Lausanne in June 2016.[160][161]TheSequence Analysis Association(SAA) was founded at the International Symposium on Sequence Analysis and Related Methods, in October 2018 at Monte Verità, TI, Switzerland. The SAA is an international organization whose goal is to organize events such as symposia and training courses and related events, and to facilitate scholars' access to sequence analysis resources.
|
https://en.wikipedia.org/wiki/Social_sequence_analysis
|
In the field ofmultivariate statistics,kernel principal component analysis (kernel PCA)[1]is an extension ofprincipal component analysis(PCA) using techniques ofkernel methods. Using a kernel, the originally linear operations of PCA are performed in areproducing kernel Hilbert space.
Recall that conventional PCA operates on zero-centered data; that is,
wherexi{\displaystyle \mathbf {x} _{i}}is one of theN{\displaystyle N}multivariate observations.
It operates by diagonalizing thecovariance matrix,
in other words, it gives aneigendecompositionof the covariance matrix:
which can be rewritten as
(See also:Covariance matrix as a linear operator)
To understand the utility of kernel PCA, particularly for clustering, observe that, whileNpoints cannot, in general, belinearly separatedind<N{\displaystyle d<N}dimensions, they canalmost alwaysbe linearly separated ind≥N{\displaystyle d\geq N}dimensions. That is, givenNpoints,xi{\displaystyle \mathbf {x} _{i}}, if we map them to anN-dimensional space with
it is easy to construct ahyperplanethat divides the points into arbitrary clusters. Of course, thisΦ{\displaystyle \Phi }creates linearly independent vectors, so there is no covariance on which to perform eigendecompositionexplicitlyas we would in linear PCA.
Instead, in kernel PCA, a non-trivial, arbitraryΦ{\displaystyle \Phi }function is 'chosen' that is never calculated explicitly, allowing the possibility to use very-high-dimensionalΦ{\displaystyle \Phi }'s if we never have to actually evaluate the data in that space. Since we generally try to avoid working in theΦ{\displaystyle \Phi }-space, which we will call the 'feature space', we can create the N-by-N kernel
which represents the inner product space (seeGramian matrix) of the otherwise intractable feature space. The dual form that arises in the creation of a kernel allows us to mathematically formulate a version of PCA in which we never actually solve the eigenvectors and eigenvalues of the covariance matrix in theΦ(x){\displaystyle \Phi (\mathbf {x} )}-space (seeKernel trick). The N-elements in each column ofKrepresent thedot productof one point of the transformed data with respect to all the transformed points (N points). Some well-known kernels are shown in the example below.
Because we are never working directly in the feature space, the kernel-formulation of PCA is restricted in that it computes not the principal components themselves, but the projections of our data onto those components. To evaluate the projection from a point in the feature spaceΦ(x){\displaystyle \Phi (\mathbf {x} )}onto the kth principal componentVk{\displaystyle V^{k}}(where superscript k means the component k, not powers of k)
We note thatΦ(xi)TΦ(x){\displaystyle \Phi (\mathbf {x} _{i})^{T}\Phi (\mathbf {x} )}denotes dot product, which is simply the elements of the kernelK{\displaystyle K}. It seems all that's left is to calculate and normalize theaik{\displaystyle \mathbf {a} _{i}^{k}}, which can be done by solving the eigenvector equation
whereN{\displaystyle N}is the number of data points in the set, andλ{\displaystyle \lambda }anda{\displaystyle \mathbf {a} }are the eigenvalues and eigenvectors ofK{\displaystyle K}. Then to normalize the eigenvectorsak{\displaystyle \mathbf {a} ^{k}}, we require that
Care must be taken regarding the fact that, whether or notx{\displaystyle x}has zero-mean in its original space, it is not guaranteed to be centered in the feature space (which we never compute explicitly). Since centered data is required to perform an effective principal component analysis, we 'centralize'K{\displaystyle K}to becomeK′{\displaystyle K'}
where1N{\displaystyle \mathbf {1_{N}} }denotes a N-by-N matrix for which each element takes value1/N{\displaystyle 1/N}. We useK′{\displaystyle K'}to perform the kernel PCA algorithm described above.
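Putting these steps together, a bare-bones NumPy sketch of kernel PCA with a Gaussian (RBF) kernel might look as follows; the kernel choice, the gamma parameter, and the normalization convention λk(ak·ak) = 1 follow the description above, and the code is an illustration rather than a tuned implementation.

# Bare-bones kernel PCA with an RBF kernel, following the steps described above.
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    n = X.shape[0]
    # Pairwise squared Euclidean distances and the RBF (Gaussian) kernel matrix.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-gamma * sq_dists)

    # Center the kernel matrix: K' = K - 1_N K - K 1_N + 1_N K 1_N.
    one_n = np.full((n, n), 1.0 / n)
    K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition; eigh returns eigenvalues in ascending order, so reverse.
    eigvals, eigvecs = np.linalg.eigh(K_centered)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

    # Normalize the eigenvectors a^k so that lambda_k * (a^k . a^k) = 1;
    # the projections of the training points are then K' a^k.
    alphas = eigvecs[:, :n_components] / np.sqrt(np.maximum(eigvals[:n_components], 1e-12))
    return K_centered @ alphas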
One caveat of kernel PCA should be noted here. In linear PCA, we can use the eigenvalues to rank the eigenvectors based on how much of the variation of the data is captured by each principal component. This is useful for dimensionality reduction, and it can also be applied to kernel PCA. However, in practice there are cases in which all variations of the data are the same, which is typically caused by a wrong choice of the kernel scale.
In practice, a large data set leads to a large K, and storing K may become a problem. One way to deal with this is to perform clustering on the dataset and populate the kernel with the means of those clusters. Since even this method may yield a relatively large K, it is common to compute only the top P eigenvalues and their corresponding eigenvectors.
Consider three concentric clouds of points (shown); we wish to use kernel PCA to identify these groups. The color of the points does not represent information involved in the algorithm, but only shows how the transformation relocates the data points.
First, consider the kernel
Applying this to kernel PCA yields the next image.
Now consider aGaussian kernel:
That is, this kernel is a measure of closeness, equal to 1 when the points coincide and equal to 0 at infinity.
Note in particular that the first principal component is enough to distinguish the three different groups, which is impossible using only linear PCA, because linear PCA operates only in the given (in this case two-dimensional) space, in which these concentric point clouds are not linearly separable.
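A comparable experiment can be run with scikit-learn's KernelPCA on synthetic rings (make_circles produces two concentric groups rather than three); the gamma value below is an illustrative assumption.

# Reproducing the concentric-rings effect with scikit-learn (illustrative settings).
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear = KernelPCA(n_components=2, kernel="linear").fit_transform(X)  # cannot separate the rings
rbf = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

# With a suitable gamma, the first RBF component roughly separates inner from outer ring.
print(rbf[y == 0, 0].mean(), rbf[y == 1, 0].mean())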
Kernel PCA has been demonstrated to be useful for novelty detection[3]and image de-noising.[4]
|
https://en.wikipedia.org/wiki/Kernel_principal_component_analysis
|
Inmathematics,spectralgraph theoryis the study of the properties of agraphin relationship to thecharacteristic polynomial,eigenvalues, andeigenvectorsof matrices associated with the graph, such as itsadjacency matrixorLaplacian matrix.
The adjacency matrix of a simple undirected graph is arealsymmetric matrixand is thereforeorthogonally diagonalizable; its eigenvalues are realalgebraic integers.
While the adjacency matrix depends on the vertex labeling, itsspectrumis agraph invariant, although not a complete one.
Spectral graph theory is also concerned with graph parameters that are defined via multiplicities of eigenvalues of matrices associated to the graph, such as theColin de Verdière number.
Two graphs are calledcospectralorisospectralif the adjacency matrices of the graphs areisospectral, that is, if the adjacency matrices have equalmultisetsof eigenvalues.
Cospectral graphs need not beisomorphic, but isomorphic graphs are always cospectral.
A graphG{\displaystyle G}is said to be determined by its spectrum if any other graph with the same spectrum asG{\displaystyle G}is isomorphic toG{\displaystyle G}.
Some first examples of families of graphs that are determined by their spectrum include:
A pair of graphs are said to be cospectral mates if they have the same spectrum, but are non-isomorphic.
The smallest pair of cospectral mates is {K1,4, C4 ∪ K1}, comprising the 5-vertex star and the graph union of the 4-vertex cycle and the single-vertex graph.[1] The first example of cospectral graphs was reported by Collatz and Sinogowitz[2] in 1957.
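This smallest pair can be checked numerically; the sketch below constructs both adjacency matrices by hand and compares their eigenvalue multisets.

# Verify that the 5-vertex star K_{1,4} and C_4 together with an isolated
# vertex have the same adjacency spectrum.
import numpy as np

star = np.zeros((5, 5))                 # K_{1,4}: vertex 0 joined to vertices 1..4
star[0, 1:] = star[1:, 0] = 1

cycle_plus_point = np.zeros((5, 5))     # C_4 on vertices 0..3, vertex 4 isolated
for i in range(4):
    j = (i + 1) % 4
    cycle_plus_point[i, j] = cycle_plus_point[j, i] = 1

spec_star = np.sort(np.linalg.eigvalsh(star))
spec_cycle = np.sort(np.linalg.eigvalsh(cycle_plus_point))
print(np.round(spec_star, 6), np.round(spec_cycle, 6))
assert np.allclose(spec_star, spec_cycle)   # both spectra are {-2, 0, 0, 0, 2}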
The smallest pair ofpolyhedralcospectral mates areenneahedrawith eight vertices each.[3]
Almost alltreesare cospectral, i.e., as the number of vertices grows, the fraction of trees for which there exists a cospectral tree goes to 1.[4]
A pair ofregular graphsare cospectral if and only if their complements are cospectral.[5]
A pair ofdistance-regular graphsare cospectral if and only if they have the same intersection array.
Cospectral graphs can also be constructed by means of theSunada method.[6]
Another important source of cospectral graphs are the point-collinearity graphs and the line-intersection graphs ofpoint-line geometries. These graphs are always cospectral but are often non-isomorphic.[7]
The famousCheeger's inequalityfromRiemannian geometryhas a discrete analogue involving the Laplacian matrix; this is perhaps the most important theorem in spectral graph theory and one of the most useful facts in algorithmic applications. It approximates the sparsest cut of a graph through the second eigenvalue of its Laplacian.
TheCheeger constant(alsoCheeger numberorisoperimetric number) of agraphis a numerical measure of whether or not a graph has a "bottleneck". The Cheeger constant as a measure of "bottleneckedness" is of great interest in many areas: for example, constructing well-connectednetworks of computers,card shuffling, andlow-dimensional topology(in particular, the study ofhyperbolic3-manifolds).
More formally, the Cheeger constant h(G) of a graph G on n vertices is defined as
h(G) = min{ |∂(S)| / |S| : S ⊆ V(G), 0 < |S| ≤ n/2 },
where the minimum is over all nonempty sets S of at most n/2 vertices and ∂(S) is the edge boundary of S, i.e., the set of edges with exactly one endpoint in S.[8]
When the graph G is d-regular, there is a relationship between h(G) and the spectral gap d − λ2 of G. An inequality due to Dodziuk[9] and independently Alon and Milman[10] states that[11]
(d − λ2)/2 ≤ h(G) ≤ √(2d(d − λ2)).
This inequality is closely related to theCheeger boundforMarkov chainsand can be seen as a discrete version ofCheeger's inequalityinRiemannian geometry.
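The bound can be verified numerically for a small regular graph; the sketch below brute-forces the Cheeger constant of the 6-cycle and checks it against the spectral gap, assuming the inequality in the two-sided form (d − λ2)/2 ≤ h(G) ≤ √(2d(d − λ2)) stated above.

# Brute-force check of the Cheeger constant against the spectral gap for C_6.
from itertools import combinations
import numpy as np

n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):                          # the cycle C_6 is 2-regular
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

def cheeger_constant(A):
    m = A.shape[0]
    best = np.inf
    for size in range(1, m // 2 + 1):       # all nonempty S with |S| <= m/2
        for S in combinations(range(m), size):
            S = set(S)
            boundary = sum(A[i, j] for i in S for j in range(m) if j not in S)
            best = min(best, boundary / len(S))
    return best

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
gap = d - eigs[1]                           # d - lambda_2
h = cheeger_constant(A)
print(h, gap)                               # 2/3 and 1 for C_6
assert gap / 2 <= h <= np.sqrt(2 * d * gap)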
For general connected graphs that are not necessarily regular, an alternative inequality is given by Chung[12]: 35
whereλ{\displaystyle \lambda }is the least nontrivial eigenvalue of the normalized Laplacian, andh(G){\displaystyle {\mathbf {h} }(G)}is the (normalized) Cheeger constant
wherevol(Y){\displaystyle {\mathrm {vol} }(Y)}is the sum of degrees of vertices inY{\displaystyle Y}.
There is an eigenvalue bound forindependent setsinregular graphs, originally due toAlan J. Hoffmanand Philippe Delsarte.[13]
Suppose thatG{\displaystyle G}is ak{\displaystyle k}-regular graph onn{\displaystyle n}vertices with least eigenvalueλmin{\displaystyle \lambda _{\mathrm {min} }}. Then:α(G)≤n1−kλmin{\displaystyle \alpha (G)\leq {\frac {n}{1-{\frac {k}{\lambda _{\mathrm {min} }}}}}}whereα(G){\displaystyle \alpha (G)}denotes itsindependence number.
This bound has been applied to establish e.g. algebraic proofs of theErdős–Ko–Rado theoremand its analogue for intersecting families of subspaces overfinite fields.[14]
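As a numerical illustration, the bound can be evaluated on the Petersen graph (3-regular, 10 vertices, least adjacency eigenvalue −2), where it gives exactly the independence number 4; the sketch assumes the networkx and numpy libraries.

# Hoffman bound alpha(G) <= n / (1 - k/lambda_min), checked on the Petersen graph.
from itertools import combinations
import networkx as nx
import numpy as np

G = nx.petersen_graph()                     # 3-regular graph on 10 vertices
A = nx.to_numpy_array(G)
n, k = A.shape[0], int(A.sum(axis=1)[0])
lam_min = np.linalg.eigvalsh(A).min()       # -2 for the Petersen graph

hoffman_bound = n / (1 - k / lam_min)       # evaluates to 4.0

def independence_number(G):
    nodes = list(G.nodes)
    for size in range(len(nodes), 0, -1):   # brute force is fine for 10 vertices
        for S in combinations(nodes, size):
            if all(not G.has_edge(u, v) for u, v in combinations(S, 2)):
                return size
    return 0

alpha = independence_number(G)              # equals 4
print(alpha, hoffman_bound)
assert alpha <= hoffman_bound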
For general graphs which are not necessarily regular, a similar upper bound for the independence number can be derived by using the maximum eigenvalue λmax′{\displaystyle \lambda '_{max}} of the normalized Laplacian[12] of G{\displaystyle G}: α(G)≤n(1−1λmax′)maxdegmindeg{\displaystyle \alpha (G)\leq n(1-{\frac {1}{\lambda '_{\mathrm {max} }}}){\frac {\mathrm {maxdeg} }{\mathrm {mindeg} }}} where maxdeg{\displaystyle {\mathrm {maxdeg} }} and mindeg{\displaystyle {\mathrm {mindeg} }} denote the maximum and minimum degree in G{\displaystyle G}, respectively. This is a consequence of a more general inequality (pp. 109 in[12]): vol(X)≤(1−1λmax′)vol(V(G)){\displaystyle {\mathrm {vol} }(X)\leq (1-{\frac {1}{\lambda '_{\mathrm {max} }}}){\mathrm {vol} }(V(G))} where X{\displaystyle X} is an independent set of vertices and vol(Y){\displaystyle {\mathrm {vol} }(Y)} denotes the sum of degrees of vertices in Y{\displaystyle Y}.
Spectral graph theory emerged in the 1950s and 1960s. Besidesgraph theoreticresearch on the relationship between structural and spectral properties of graphs, another major source was research inquantum chemistry, but the connections between these two lines of work were not discovered until much later.[15]The 1980 monographSpectra of Graphs[16]by Cvetković, Doob, and Sachs summarised nearly all research to date in the area. In 1988 it was updated by the surveyRecent Results in the Theory of Graph Spectra.[17]The 3rd edition ofSpectra of Graphs(1995) contains a summary of the further recent contributions to the subject.[15]Discrete geometric analysis created and developed byToshikazu Sunadain the 2000s deals with spectral graph theory in terms of discrete Laplacians associated with weighted graphs,[18]and finds application in various fields, includingshape analysis. In most recent years, the spectral graph theory has expanded to vertex-varying graphs often encountered in many real-life applications.[19][20][21][22]
|
https://en.wikipedia.org/wiki/Spectral_graph_theory
|
Anarchyis a form ofsocietywithoutrulers. As a type ofstateless society, it is commonly contrasted withstates, which are centralized polities that claim amonopoly on violenceover a permanentterritory. Beyond a lack ofgovernment, it can more precisely refer to societies that lack any form ofauthorityorhierarchy. While viewed positively byanarchists, the primary advocates of anarchy, it is viewed negatively by advocates ofstatism, who see it in terms ofsocial disorder.
The word "anarchy" was first defined byAncient Greek philosophy, which understood it to be a corrupted form ofdirect democracy, where a majority of people exclusively pursue their own interests. This use of the word made its way intoLatinduring theMiddle Ages, before the concepts of anarchy and democracy were disconnected from each other in the wake of theAtlantic Revolutions. During theAge of Enlightenment, philosophers began to look at anarchy in terms of the "state of nature", a thought experiment used to justify various forms of hierarchical government. By the late 18th century, some philosophers began to speak in defence of anarchy, seeing it as a preferable alternative to existing forms oftyranny. This lay the foundations for the development of anarchism, which advocates for the creation of anarchy throughdecentralisationandfederalism.
As a concept,anarchyis commonly defined by what it excludes.[1]Etymologically, anarchy is derived from theGreek:αναρχία,romanized:anarchia; where "αν" ("an") means "without" and "αρχία" ("archia") means "ruler".[2]Therefore, anarchy is fundamentally defined by the absence ofrulers.[3]
While anarchy specifically represents a society without rulers, it can more generally refer to astateless society,[4]or a society withoutgovernment.[5]Anarchy is thus defined in direct contrast to theState,[6]an institution that claims amonopoly on violenceover a giventerritory.[7]Anarchists such asErrico Malatestahave also defined anarchy more precisely as a society withoutauthority,[8]orhierarchy.[9]
Anarchy is often defined synonymously as chaos orsocial disorder,[10]reflecting thestate of natureas depicted byThomas Hobbes.[11]By this definition, anarchy represents not only an absence of government but also an absence ofgovernance. This connection of anarchy with chaos usually assumes that, without government, no means of governance exist and thus that disorder is an unavoidable outcome of anarchy.[12]SociologistFrancis Dupuis-Dérihas described chaos as a "degenerate form of anarchy", in which there is an absence, not just of rulers, but of any kind of political organization.[13]He contrasts the "rule of all" under anarchy with the "rule of none" under chaos.[14]
Since its conception, anarchy has been used in both a positive and negative sense, respectively describing a free society without coercion or a state of chaos.[15]
When the word "anarchy" (Greek:αναρχία,romanized:anarchia) was first defined in ancient Greece, it initially had both a positive and negative connotation, respectively referring tospontaneous orderor chaos without rulers. The latter definition was taken by the philosopherPlato, who criticisedAthenian democracyas "anarchical", and his discipleAristotle, who questioned how to prevent democracy from descending into anarchy.[16]Ancient Greek philosophyinitially understood anarchy to be a corrupted form ofdirect democracy, although it later came to be conceived of as its own form of political regime, distinct from any kind of democracy.[17]According to the traditional conception of political regimes, anarchy results when authority is derived from a majority of people who pursue their own interests.[18]
During theMiddle Ages, the word "anarchia" came into use in Latin, in order to describe theeternal existenceof theChristian God. It later came to reconstitute its original political definition, describing a society without government.[15]
Christian theologians came to claim that all humans were inherently sinful and ought to submit to the omnipotence of a higher power, with the French Protestant reformer John Calvin declaring that even the worst form of tyranny was preferable to anarchy.[19] The Scottish Quaker Robert Barclay also denounced the "anarchy" of libertines such as the Ranters.[20] In contrast, radical Protestants such as the Diggers advocated for anarchist societies based on common ownership.[21] Following attempts to establish such a society, however, the Digger Gerard Winstanley came to advocate for an authoritarian form of communism.[22]
During the 16th century, the term "anarchy" first came into use in theEnglish language.[23]It was used to describe the disorder that results from the absence of or opposition to authority, withJohn Miltonwriting of "the waste/Wide anarchy of Chaos" inParadise Lost.[24]Initially used as a pejorative descriptor fordemocracy, the two terms began to diverge following theAtlantic Revolutions, when democracy took on a positive connotation and was redefined as a form ofelected,representational government.[25]
Political philosophers of theAge of Enlightenmentcontrasted thestatewith what they called the "state of nature", a hypothetical description of stateless society, although they disagreed on its definition.[26]Thomas Hobbesconsidered the state of nature to be a "nightmare of permanent war of all against all".[27]In contrast,John Lockeconsidered it to be a harmonious society in which people lived "according to reason, without a common superior". They would be subject only tonatural law, with otherwise "perfect freedom to order their actions".[28]
In depicting the "state of nature" to be a free and equal society governed by natural law, Locke distinguished between society and the state.[29]He argued that, without established laws, such a society would be inherently unstable, which would make alimited governmentnecessary in order to protect people'snatural rights.[30]He likewise argued that limiting the reach of the state was reasonable when peaceful cooperation without a state was possible.[31]His thoughts on the state of nature and limited government ultimately provided the foundation for theclassical liberalargument forlaissez-faire.[32]
Immanuel Kantdefined "anarchy", in terms of the "state of nature", as a lack of government. He discussed the concept of anarchy in order to question why humanity ought to leave the state of nature behind and instead submit to a "legitimate government".[33]In contrast to Thomas Hobbes, who conceived of the state of nature as a "war of all against all" which existed throughout the world, Kant considered it to be only athought experiment. Kant believed thathuman naturedrove people to not only seek outsocietybut also to attempt to attain asuperior hierarchical status.[34]
While Kant distinguished between different forms of the state of nature, contrasting the "solitary" form against the "social", he held that there was no means ofdistributive justicein such a circumstance. He considered that, withoutlaw, ajudiciaryand means forlaw enforcement, the danger of violence would be ever-present, as each person could only judge for themselves what is right without any form of arbitration. He thus concluded that human society ought to leave the state of nature behind and submit to the authority of a state.[35]Kant argued that the threat of violence incentivises humans, by the need to preserve their own safety, to leave the state of nature and submit to the state.[36]Based on his "hypothetical imperative", he argued that if humans desire to secure their own safety, then they ought to avoid anarchy.[37]But he also argued, according to his "categorical imperative", that it is not onlyprudentbut also amoralandpolitical obligationto avoid anarchy and submit to a state.[38]Kant thus concluded that even if people did not desire to leave anarchy, they ought to as a matter of duty to abide by universal laws.[39]
In contrast,Edmund Burke's 1756 workA Vindication of Natural Society, argued in favour of anarchist society in a defense of the state of nature.[40]Burke insisted that reason was all that was needed to govern society and that "artificial laws" had been responsible for all social conflict and inequality, which led him to denounce the church and the state.[41]Burke's anti-statist arguments preceded the work of classical anarchists and directly inspired the political philosophy ofWilliam Godwin.[42]
In his 1793 book Political Justice, Godwin proposed the creation of a more just and free society by abolishing government, concluding that order could be achieved through anarchy.[43] Although he came to be known as a founding father of anarchism,[44] Godwin himself mostly used the word "anarchy" in its negative definition,[45] fearing that an immediate dissolution of government without any prior political development would lead to disorder.[46] Godwin held that anarchy could best be realised through gradual evolution, by cultivating reason through education, rather than through a sudden and violent revolution.[47] But he also considered transitory anarchy to be preferable to lasting despotism, stating that anarchy bore a distorted resemblance to "true liberty"[45] and could eventually give way to "the best form of human society".[46]
This positive conception of anarchy was soon taken up by other political philosophers. In his 1792 workThe Limits of State Action,Wilhelm von Humboldtcame to consider an anarchist society, which he conceived of as a community built on voluntary contracts between educated individuals, to be "infinitely preferred to any State arrangements".[48]The French political philosopherDonatien Alphonse François, in his 1797 novelJuliette, questioned what form of government was best.[49]He argued that it was passion, not law, that had driven human society forward, concluding by calling for the abolition of law and a return to a state of nature by accepting anarchy.[50]He concluded by declaring anarchy to be the best form of political regime, as it was law that gave rise totyrannyand anarchic revolution that was capable of bringing down bad governments.[51]After theAmerican Revolution,Thomas Jeffersonsuggested that a stateless society might lead to greater happiness for humankind and has been attributed the maxim "that government is best which governs least". Jefferson's political philosophy later inspired the development ofindividualist anarchism in the United States, with contemporaryright-libertariansproposing that private property could be used to guarantee anarchy.[52]
Pierre-Joseph Proudhon was the first person known to self-identify as an anarchist, adopting the label in order to provoke those who took anarchy to mean disorder.[53] Proudhon was one of the first people to use the word "anarchy" (French: anarchie) in a positive sense, to mean a free society without government.[54] To Proudhon, as anarchy did not allow coercion, it could be defined synonymously with liberty.[55] In arguing against monarchy, he claimed that "the Republic is a positive anarchy ... it is the liberty that is the mother, not the daughter, of order."[54] While acknowledging this common definition of anarchy as disorder, Proudhon claimed that it was actually authoritarian government and wealth inequality that were the true causes of social disorder.[56] By counterposing this against anarchy, which he defined as an absence of rulers,[57] Proudhon declared that "just as man seeks justice in equality, society seeks order in anarchy".[58] Proudhon based his case for anarchy on his conception of a just and moral state of nature.[59]
Proudhon positedfederalismas an organizational form andmutualismas an economic form, which he believed would lead towards the end goal of anarchy.[60]In his 1863 workThe Federal Principle, Proudhon elaborated his view of anarchy as "the government of each man by himself," using the English term of "self-government" as a synonym for it.[61]According to Proudhon, under anarchy, "all citizens reign and govern" throughdirect participationin decision-making.[62]He proposed that this could be achieved through a system offederalismanddecentralisation,[63]in which every community is self-governing and any delegation of decision-making is subject toimmediate recall.[62]He likewise called for the economy to be brought underindustrial democracy, which would abolishprivate property.[64]Proudhon believed that all this would eventually lead to anarchy, as individual and collective interests aligned andspontaneous orderis achieved.[65]
Proudhon thus came to be known as the "father of anarchy" by the anarchist movement, which emerged from thelibertarian socialistfaction of theInternational Workingmen's Association(IWA).[66]Until the establishment of IWA in 1864, there had been no anarchist movement, only individuals and groups that saw anarchy as their end goal.[67]
One of Proudhon's keenest students was the Russian revolutionaryMikhail Bakunin, who adopted his critiques of private property and government, as well as his views on the desirability of anarchy.[68]During theRevolutions of 1848, Bakunin wrote of his hopes of igniting a revolutionary upheaval in theRussian Empire, writing to the German poetGeorg Herweghthat "I do not fear anarchy, but desire it with all my heart". Although he still used the negative definition of anarchy as disorder, he nevertheless saw the need for "something different: passion and life and a new world, lawless and thereby free."[69]
Bakunin popularised "anarchy" as a term,[70]using both its negative and positive definitions,[71]in order to respectively describe the disorderly destruction of revolution and the construction of a new social order in the post-revolutionary society.[72]Bakunin envisioned the creation of an "International Brotherhood", which could lead people through "the thick of popular anarchy" in asocial revolution.[73]Upon joining the IWA, in 1869, Bakunin drew up a programme for such a Brotherhood, in which he infused the word "anarchy" with a more positive connotation:[74]
We do not fear anarchy, we invoke it. For we are convinced that anarchy, meaning the unrestricted manifestation of the liberated life of the people, must spring from liberty, equality, the new social order, and the force of the revolution itself against the reaction. There is no doubt that this new life – the popular revolution – will in good time organize itself, but it will create its revolutionary organization from the bottom up, from the circumference to the center, in accordance with the principle of liberty, and not from the top down or from the center to the circumference in the manner of all authority. It matters little to us if that authority is calledChurch,Monarchy,constitutional State,bourgeois Republic, or evenrevolutionary Dictatorship. We detest and reject all of them equally as the unfailing sources of exploitation and despotism.
|
https://en.wikipedia.org/wiki/Anarchy
|
Aclass browseris a feature of anintegrated development environment(IDE) that allows the programmer to browse, navigate, or visualize the structure ofobject-oriented programmingcode.
Most modern class browsers owe their origins to Smalltalk, one of the earliest object-oriented languages and development environments. The typical Smalltalk "five-pane" browser is a series of horizontally abutting selection panes positioned above an editing pane; the selection panes allow the user to specify first a category and then a class, and further to refine the selection to a specific class or instance method, whose implementation is presented in the editing pane for inspection or modification.
Most succeeding object-oriented languages differed from Smalltalk in that they werecompiledand executed in a discreteruntime environment, rather than being dynamically integrated into a monolithic system like the early Smalltalk environments. Nevertheless, the concept of a table-like or graphic browser to navigate a class hierarchy caught on.
With the popularity of C++ starting in the late 1980s, modern IDEs added class browsers, at first simply to navigate class hierarchies, and later to aid in the creation of new classes. With the introduction of Java in the mid-1990s, class browsers became an expected part of any graphical development environment.
All major development environments supply some manner of class browser.
Modern class browsers fall into three general categories: thecolumnarbrowsers, theoutlinebrowsers, and thediagrambrowsers.
Continuing the Smalltalk tradition, columnar browsers display the class hierarchy from left to right in a series of columns. Often the rightmost column is reserved for the instance methods or variables of the leaf class.
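As a rough, IDE-agnostic sketch of the columnar idea, the snippet below uses Python's runtime introspection to derive the left-to-right panes for a selected class: its ancestor chain, its direct subclasses, and its public methods. The example classes and the choice of three columns are illustrative assumptions, not the layout of any particular browser.

```python
# Minimal columnar-browser sketch over a live Python class hierarchy:
# column 1 = superclass chain, column 2 = direct subclasses,
# column 3 = public methods of the selected class.
import inspect

def columns_for(cls):
    ancestors = [c.__name__ for c in inspect.getmro(cls)]      # leaf-to-root chain
    subclasses = [c.__name__ for c in cls.__subclasses__()]    # one level down
    methods = [name for name, _ in inspect.getmembers(cls, callable)
               if not name.startswith("_")]                    # hide dunder methods
    return ancestors, subclasses, methods

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

if __name__ == "__main__":
    for title, column in zip(("Ancestors", "Subclasses", "Methods"), columns_for(Shape)):
        print(f"{title}: {column}")
```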
Systems with roots in Microsoft Windows tend to use an outline-form browser, often with colorful (if cryptic) icons to denote classes and their attributes.
In the early years of the 21st century class browsers began to morph intomodeling tools, where programmers could not only visualize their class hierarchy as a diagram, but also add classes to their code by adding them to the diagram. Most of these visualization systems have been based on some form of theUnified Modeling Language(UML).
As development environments addrefactoringfeatures, many of these features have been implemented in the class browser as well as in text editors. A refactoring browser can allow a programmer to move an instance variable from one class to another simply by dragging it in the graphic user interface, or to combine or separate classes using mouse gestures rather than a large number of text editor commands.
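The "move an instance variable" gesture described above can be thought of as a small transformation on the browser's model of the class hierarchy. The sketch below uses plain dictionaries as a stand-in for that internal model; the structure and names are assumptions made for illustration, and a real refactoring browser would also rewrite every method that references the moved variable.

```python
# Toy model of a refactoring browser's "move instance variable" operation.
# Each class is modelled as {"vars": [...], "methods": [...]}.

def move_instance_variable(model, var, source, target):
    """Move instance variable `var` from class `source` to class `target`."""
    if var not in model[source]["vars"]:
        raise ValueError(f"{source} has no instance variable {var!r}")
    model[source]["vars"].remove(var)
    if var not in model[target]["vars"]:
        model[target]["vars"].append(var)
    return model

model = {
    "Invoice":  {"vars": ["number", "customer_name"], "methods": ["total"]},
    "Customer": {"vars": ["name"],                    "methods": []},
}

move_instance_variable(model, "customer_name", source="Invoice", target="Customer")
print(model)   # "customer_name" now lives on Customer
```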
An early add-on for DigitalkSmalltalkwas a logic browser forPrologrules encapsulated as clauses within classes. More recent logic browsers have appeared asBackTalkandSOUL(Smalltalk Open Unification Language with LiCor, or library for code reasoning) for Squeak and VisualWorks Smalltalk. A logic browser provides an interface to Prolog implemented in Smalltalk (Lispengines have often been implemented in Smalltalk). A comparable browser can be found in ILog rules and some OPS production systems.Visual PrologandXPCEprovide comparable rule browsing. In the case of SOUL, VisualWorks is provided with both a query browser and a clause browser; Backtalk provides a constraints browser. The comments ofAlan Kayon the parallel of Smalltalk and Prolog emerged in the same timeframe but with very little cross-fertilization. The interest in XSB prolog forXULand the migration of AMZI! prolog to the Eclipse IDE are current paths in logic browser evolution. Rules encapsulated in classes can be found inLogtalkand severalOOPProlog variants such asLPA Prolog,Visual PrologandAMZI!as well as mainstreamSICStus.
One variant of theSeasideweb framework in Smalltalk permits a class browser to be opened at runtime in the running web browser: an edit to a method then takes immediate effect in the running web application. In the case of Vistascript (Vista Smalltalk) for MicrosoftIE7, a right-click on the background opens a ClassHierarchyBrowser. This is somewhat like editingJavaScriptprototypes in a web browser orRuby,GroovyorJythonclasses in anIDErunning in aJVM.
|
https://en.wikipedia.org/wiki/Class_browser
|
Agovernmentis the system or group of people governing an organized community, generally astate.
In the case of its broad associative definition, government normally consists oflegislature,executive, andjudiciary. Government is a means by which organizationalpoliciesare enforced, as well as a mechanism for determining policy. In many countries, the government has a kind ofconstitution, a statement of its governing principles and philosophy.
While all types of organizations havegovernance, the termgovernmentis often used more specifically to refer to the approximately 200independent national governmentsandsubsidiary organizations.
The main types of modernpolitical systemsrecognized aredemocracies,totalitarian regimes, and, sitting between these two,authoritarian regimeswith a variety ofhybrid regimes.[1][2]Modern classification systems also includemonarchiesas a standalone entity or as a hybrid system of the main three.[3][4]Historically prevalent forms of government include monarchy,aristocracy,timocracy,oligarchy,democracy,theocracy, andtyranny. These forms are not always mutually exclusive, andmixed governmentsare common. The main aspect of any philosophy of government is how political power is obtained, with the two main forms beingelectoral contestandhereditary succession.
A government is the system to govern a state or community. The Cambridge Dictionary defines government as "the system used for controlling a country, city, or group of people" or "an organization that officially manages and controls a country or region, creating laws, collecting taxes, providing public services".[5] While all types of organizations have governance, the word government is often used more specifically to refer to the approximately 200 independent national governments on Earth, as well as their subsidiary organizations, such as state and provincial governments and local governments.[6]
The wordgovernmentderives from the Greek verbκυβερνάω[kubernáo] meaningto steerwith agubernaculum(rudder), the metaphorical sense being attested in the literature ofclassical antiquity, includingPlato'sShip of State.[7]InBritish English, "government" sometimes refers to what's also known as a "ministry" or an "administration", i.e., the policies and government officials of a particular executive or governingcoalition. Finally,governmentis also sometimes used in English as asynonymfor rule or governance.[8]
In other languages,cognatesmay have a narrower scope, such as thegovernment of Portugal, which is more similar to the concept of"administration".
The moment and place at which human government first developed are lost in time; however, history does record the formation of early governments. About 5,000 years ago, the first small city-states appeared.[9] By the third to second millennia BC, some of these had developed into larger governed areas: Sumer, ancient Egypt, the Indus Valley civilization, and the Yellow River civilization.[10]
One explanation for the emergence of governments is agriculture. Since the Neolithic Revolution, agriculture has been an efficient method of creating a food surplus, which enabled people to specialize in non-agricultural activities. Some of these people came to rule over others as an external authority, while others experimented socially with diverse governance models; both activities formed the basis of governments.[11] These governments gradually became more complex as agriculture supported larger and denser populations, creating new interactions and social pressures that the government needed to control. David Christian explains:
As farming populations gathered in larger and denser communities, interactions between different groups increased and the social pressure rose until, in a striking parallel with star formation, new structures suddenly appeared, together with a new level of complexity. Like stars, cities and states reorganize and energize the smaller objects within their gravitational field.[9]
Another explanation is the need to properly manage infrastructure projects such as water infrastructure. Historically, this required centralized administration and complex social organisation, as seen in regions like Mesopotamia.[12] However, there is archaeological evidence of similar successes achieved by more egalitarian and decentralized complex societies.[13]
Starting at the end of the 17th century, the prevalence of republican forms of government grew. TheEnglish Civil WarandGlorious Revolutionin England, theAmerican Revolution, and theFrench Revolutioncontributed to the growth of representative forms of government. TheSoviet Unionwas the first large country to have aCommunistgovernment.[6]Since the fall of theBerlin Wall,liberal democracyhas become an even more prevalent form of government.[14]
In the nineteenth and twentieth centuries, there was a significant increase in the size and scale of government at the national level.[15]This included the regulation of corporations and the development of thewelfare state.[14]
In political science, it has long been a goal to create a typology or taxonomy ofpolities, as typologies of political systems are not obvious.[16]It is especially important in thepolitical sciencefields ofcomparative politicsandinternational relations. Like all categories discerned within forms of government, the boundaries of government classifications are either fluid or ill-defined.
Superficially, all governments have an official de jure or ideal form. The United States is a federal constitutional republic, while the former Soviet Union was a federal socialist republic. However, self-identification is not objective, and as Kopstein and Lichbach argue, defining regimes can be tricky, especially de facto, when a state's government and economy deviate in practice from their official form.[17] For example, Voltaire argued that "the Holy Roman Empire is neither Holy, nor Roman, nor an Empire".[18] In practice, the Soviet Union was a centralized autocratic one-party state under Joseph Stalin.
Identifying a form of government can be challenging because manypolitical systemsoriginate from socio-economic movements, and the parties that carry those movements into power often name themselves after those ideologies. These parties may have competing political ideologies and strong ties to particular forms of government. As a result, the movements themselves can sometimes be mistakenly considered as forms of government, rather than the ideologies that influence the governing system.[19]
Other complications include general non-consensus or deliberate "distortion or bias" of reasonable technical definitions of political ideologies and associated forms of governing, due to the nature of politics in the modern era. For example: The meaning of "conservatism" in the United States has little in common with the way the word's definition is used elsewhere. As Ribuffo notes, "what Americans now call conservatism much of the world calls liberalism orneoliberalism"; a "conservative" in Finland would be labeled a "socialist" in the United States.[20]Since the 1950s, conservatism in the United States has been chiefly associated withright-wing politicsand theRepublican Party. However, during the era ofsegregationmanySouthern Democratswere conservatives, and they played a key role in theconservative coalitionthat controlled Congress from 1937 to 1963.[21][a]
Opinions vary among individuals concerning the types and properties of governments that exist. "Shades of gray" are commonplace in any government and its corresponding classification. Even the most liberal democracies limit rival political activity to one extent or another, while the most tyrannical dictatorships must organize a broad base of support, creating difficulties for "pigeonholing" governments into narrow categories. Examples include claims that the United States is a plutocracy rather than a democracy, since some American voters believe elections are being manipulated by wealthy Super PACs.[22] Some consider that government should be reconceptualised in times of climatic change, with the needs and desires of the individual reshaped to generate sufficiency for all.[23]
The quality of a government can be measured by the Government effectiveness index, which relates to political efficacy and state capacity.[24]
Plato in his book The Republic (375 BC) divided governments into five basic types, four being existing forms and one being Plato's ideal form, which exists "only in speech": aristocracy, timocracy, oligarchy, democracy, and tyranny.[25]
These five regimes progressively degenerate starting with aristocracy at the top and tyranny at the bottom.[26]
In hisPolitics, Aristotle elaborates on Plato's five regimes discussing them in relation to the government of one, of the few, and of the many.[27]From this follows the classification of forms of government according to which people have the authority to rule: either one person (anautocracy, such as monarchy), a select group of people (an aristocracy), or the people as a whole (a democracy, such as a republic).
Thomas Hobbesstated on their classification:
The difference ofCommonwealthsconsisteth in the difference of thesovereign, or the person representative of all and every one of the multitude. And because the sovereignty is either in one man, or in an assembly of more than one; and into that assembly either every man hath right to enter, or not everyone, but certain men distinguished from the rest; it is manifest there can be but three kinds of Commonwealth. For the representative must need to be one man or more; and if more, then it is the assembly of all, or but of a part. When the representative is one man, then is the Commonwealth a monarchy; when an assembly of all that will come together, then it is a democracy or popular Commonwealth; when an assembly of a part only, then it is called an aristocracy. In other kinds of Commonwealth there can be none: for either one, or more, or all, must have the sovereign power (which I have shown to be indivisible) entire.[28]
According to Yale professor Juan José Linz, there are three main types of political systems today: democracies, totalitarian regimes and, sitting between these two, authoritarian regimes with hybrid regimes.[2][29] Another modern classification system includes monarchies as a standalone entity or as a hybrid system of the main three.[3] Scholars generally refer to a dictatorship as either a form of authoritarianism or totalitarianism.[30][2][31]
An autocracy is a system of government in which supreme power is concentrated in the hands of one person, whose decisions are subject to neither external legal restraints nor regularized mechanisms of popular control (except perhaps for the implicit threat of a coup d'état or mass insurrection).[32] Absolute monarchy is a historically prevalent form of autocracy, wherein a monarch governs as a singular sovereign with no limitation on royal prerogative. Most absolute monarchies are hereditary; however, some, notably the Holy See, are elected by an electoral college (such as the college of cardinals, or prince-electors). Other forms of autocracy include tyranny, despotism, and dictatorship.
Aristocracy[b]is a form of government that places power in the hands of a small,eliteruling class,[33]such as a hereditarynobilityorprivilegedcaste. This class exercisesminority rule, often as alandedtimocracy, wealthyplutocracy, oroligarchy.
Many monarchies were aristocracies, although in modern constitutional monarchies, the monarch may have little effective power. The termaristocracycould also refer to the non-peasant, non-servant, and non-cityclasses infeudalism.[34]
Democracy is a system of government wherecitizensexercise power byvotinganddeliberation. In adirect democracy, the citizenry as a whole directly forms aparticipatorygoverning body and vote directly on each issue. Inindirect democracy, the citizenry governs indirectly through the selection ofrepresentativesordelegatesfrom among themselves, typically byelectionor, less commonly, bysortition. These select citizens then meet to form a governing body, such as a legislature orjury.
Some governments combine both direct and indirect democratic governance, wherein the citizenry selects representatives to administer day-to-day governance, while also reserving the right to govern directly throughpopular initiatives,referendums(plebiscites), and theright of recall. In aconstitutional democracythe powers of the majority are exercised within the framework of representative democracy, but the constitution limitsmajority rule, usually through the provision by all of certainuniversal rights, such asfreedom of speechorfreedom of association.[35][36]
A republic is a form of government in which the country is considered a "public matter" (Latin: res publica), not the private concern or property of the rulers, and where offices of state are directly or indirectly elected or appointed rather than inherited. The people, or some significant portion of them, have supreme control over the government, and offices of state are elected or chosen by elected people.[37][38]
A common simplified definition of a republic is a government where the head of state is not a monarch.[39][40]Montesquieuincluded bothdemocracies, where all the people have a share in rule, andaristocraciesoroligarchies, where only some of the people rule, as republican forms of government.[41]
Other terms used to describe different republics includedemocratic republic,parliamentary republic,semi-presidential republic,presidential republic,federal republic,people's republic, andIslamic republic.
Federalism is a political concept in which agroupof members are bound together bycovenantwith a governingrepresentative head. The term "federalism" is also used to describe a system of government in whichsovereigntyis constitutionally divided between a central governing authority and constituent political units, variously called states, provinces or otherwise. Federalism is a system based upon democratic principles and institutions in which the power to govern is shared between national and provincial/state governments, creating what is often called afederation.[42]Proponents are often calledfederalists.
Governments are typically organised into distinct institutions constituting branches of government each with particularpowers, functions, duties, and responsibilities. The distribution of powers between these institutions differs between governments, as do the functions and number of branches. An independent, parallel distribution of powers between branches of government is theseparation of powers. A shared, intersecting, or overlapping distribution of powers is thefusion of powers.
Governments are often organised into three branches with separate powers: a legislature, an executive, and a judiciary; this is sometimes called thetrias politicamodel. However, inparliamentaryandsemi-presidential systems, branches of government often intersect, having shared membership and overlapping functions. Many governments have fewer or additional branches, such as an independentelectoral commissionorauditorybranch.[43]
Presently, most governments are administered by members of an explicitly constitutedpolitical partywhich coordinates the activities of associated governmentofficialsandcandidatesfor office. In amultiparty systemof government, multiple political parties have the capacity to gain control of government offices, typically by competing inelections, although theeffective number of partiesmay be limited.
Amajority governmentis a government by one or moregoverning partiestogether holding an absolute majority of seats in the parliament, in contrast to aminority governmentin which they have only a plurality of seats and often depend on aconfidence-and-supplyarrangement with other parties. Acoalition governmentis one in which multiple parties cooperate to form a government as part of acoalition agreement. In a single-party government, a single party forms a government without the support of a coalition, as is typically the case with majority governments,[44][45]but even a minority government may consist of just one party unable to find a willing coalition partner at the moment.[46]
A state that continuously maintains a single-party government within a (nominally) multiparty system possesses adominant-party system. In a (nondemocratic)one-party systema singleruling partyhas the (more-or-less) exclusive right to form the government, and the formation of other parties may be obstructed or illegal. In some cases, a government may have anon-partisan system, as is the case withabsolute monarchyornon-partisan democracy.
Democracy is the most popular form of government. More than half of the nations in the world are democracies—97 of 167, as of 2021.[47]However, the world is becoming more authoritarian with a quarter of the world's population underdemocratically backslidinggovernments.[47]
|
https://en.wikipedia.org/wiki/Forms_of_government
|
Aheterarchyis a system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways.[1]Definitions of the term vary among the disciplines: in social and information sciences, heterarchies arenetworksof elements in which each element shares the same "horizontal" position of power and authority, each playing a theoretically equal role. In biological taxonomy, however, the requisite features of heterarchy involve, for example, a species sharing, with a species in a differentfamily, a common ancestor which it does not share with members of its own family. This is theoretically possible under principles of "horizontal gene transfer".
A heterarchy may be orthogonal to ahierarchy, subsumed to a hierarchy, or it may contain hierarchies; the two kinds of structure are not mutually exclusive. In fact, each level in a hierarchical system is composed of a potentially heterarchical group.
The concept of heterarchy was first employed in a modern context bycyberneticianWarren McCullochin 1945.[2]As Carole L. Crumley has summarised, "[h]e examined alternativecognitivestructure(s), the collective organization of which he termed heterarchy. He demonstrated that the human brain, while reasonably orderly was not organized hierarchically. This understanding revolutionized the neural study of the brain and solved major problems in the fields ofartificial intelligenceand computer design."[3]
In a group of related items, heterarchy is a state wherein any pair of items is likely to be related in two or more differing ways. Whereas hierarchies sort groups into progressively smaller categories and subcategories, heterarchies divide and unite groups variously, according to multiple concerns that emerge or recede from view according to perspective. Crucially, no one way of dividing a heterarchical system can ever be a totalizing or all-encompassing view of the system; each division is clearly partial, and in many cases a partial division leads us, as perceivers, to a feeling of contradiction that invites a new way of dividing things. (But of course the next view is just as partial and temporary.) Heterarchy is a name for this state of affairs, and a description of a heterarchy usually requires ambivalent thought, a willingness to ambulate freely between unrelated perspectives.
However, because the requirements for a heterarchical system are not exactly stated, identifying a heterarchy through the use of archaeological materials can often prove to be difficult.[4]
In an attempt to operationalize heterarchies, Schoenherr and Dopko[5]use the concept of reward systems andRelational models theory.Relational modelsare defined by distinct expectations for exchanges between individuals in terms of authority ranking, equality matching, communality, and market pricing. They suggest that discrepancies in the kind of reward that is used to assign merit and differences in merit assigned to specific groups of individuals can be used as evidence for heterarchical structure. Their study demonstrates differences in the number of women assigned PhDs, the number of women receiving academic appointments in high status academic institutions, and scientific awards.
Examples of heterarchical conceptualizations include theGilles Deleuze/Félix Guattariconceptions ofdeterritorialization,rhizome, andbody without organs.
Numerous observers[who?]in the information sciences have argued that heterarchical structure processes more information more effectively than hierarchical design. An example of the potential effectiveness of heterarchy would be the rapid growth of the heterarchicalWikipediaproject in comparison with the failed growth of theNupediaproject.[6]Heterarchy increasingly trumps hierarchy as complexity and rate of change increase.
Informational heterarchy can be defined as an organizational form somewhere between hierarchy and network that provides horizontal links that permit different elements of an organization to cooperate whilst individually optimizing different success criteria. In an organizational context the value of heterarchy derives from the way in which it permits the legitimate valuation of multiple skills, types of knowledge or working styles without privileging one over the other. In information science, therefore, heterarchy,responsible autonomyandhierarchyare sometimes combined under the umbrella termTriarchy.
This concept has also been applied to the field ofarchaeology, where it has enabled researchers to better understand social complexity. For further reading see the works of Carole Crumley.
The term heterarchy is used in conjunction with the concepts ofholonsandholarchyto describe individualsystemsat each level of a holarchy.
A heterarchical network could be used to describeneuronconnections or democracy, although there are clearly hierarchical elements in both.[7]
AnthropologistDmitri Bondarenkofollows Carole Crumley in her definition of heterarchy as "the relation of elements to one another when they are unranked or when they possess the potential for being ranked in a number of different ways" and argues that it is therefore not strictly the opposite of hierarchy, but is rather the opposite ofhomoarchy,[8]itself definable as "the relation of elements to one another when they possess the potential for being ranked in one way only".[9]
David C. Stark (born 1950) has contributed to developing the concept of heterarchy in the sociology of organizations.
Politicalhierarchiesand heterarchies are systems in which multiple dynamic power-structures govern the actions of the system. They represent different types ofnetworkstructures that allow differing degrees of connectivity. In a (tree-structured)hierarchyeverynodeis connected to at most oneparent nodeand to zero or morechild nodes. In a heterarchy, however, a node can be connected to any of its surrounding nodes without needing to go through or to get permission from some other node.
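To make the structural contrast concrete, here is a minimal sketch in Python: the hierarchy is a tree in which each node has at most one parent, while the heterarchy is a general graph in which a node may link laterally to any neighbour. The node names are arbitrary.

```python
# Hierarchy: a tree, each node reachable from exactly one parent.
hierarchy = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": [],
    "a1": [], "a2": [],
}

# Heterarchy: a general graph, nodes connect laterally without going "up".
heterarchy = {
    "a":  {"b", "a1"},
    "b":  {"a", "a2"},
    "a1": {"a", "b", "a2"},
    "a2": {"b", "a1"},
}

def parents(tree, node):
    """In a tree there is at most one parent for any node."""
    return [p for p, children in tree.items() if node in children]

print(parents(hierarchy, "a1"))   # ['a']  -- a single chain of command
print(sorted(heterarchy["a1"]))   # ['a', 'a2', 'b']  -- multiple lateral links
```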
Socially, a heterarchy distributesprivilegeand decision-making among participants, while a hierarchy assigns more power and privilege to the members "high" in the structure. In a systemic perspective, Gilbert Probst, Jean-Yves Mercier and others describe heterarchy as the flexibility of the formal relationships inside an organization.[10]Domination and subordination links can be reversed and privileges can be redistributed in each situation, following the needs of the system.[11]
Researchers have also framed higher-education staff as operating in a heterarchical structure. Examining sex-based discrimination in psychology, Schoenherr and Dopko[5] identify discrepancies between the number of women awarded PhDs, the number of professorships held by women, and the number of scientific awards granted to women in the behavioral sciences and by the American Psychological Association. They argue that these data support the existence of different reward systems, representing heterarchies. They go on to connect the notion of heterarchy to contemporary models of relational structures in psychology (i.e., relational models theory). Schoenherr[12] has argued that this is also reflected in divisions within professional psychology, such as those between clinical psychologists and experimental psychologists. Using the history of professional psychology in Canada and the United States, he provides quotations from professional organizations to illustrate the disparate identities and reward systems. Rather than just reflecting a feature of psychological science, these[which?] case studies were presented as evidence of heterarchies in academia and in social organizations more generally.
|
https://en.wikipedia.org/wiki/Heterarchy
|
Hierarchicalclassificationis a system of grouping things according to a hierarchy.[1]
In the field ofmachine learning, hierarchical classification is sometimes referred to asinstance space decomposition,[2]which splits a completemulti-classproblem into a set of smaller classification problems.
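A common way to realize this decomposition is to train one classifier that picks a coarse group of classes and then a separate classifier within each group. The sketch below illustrates the idea on synthetic data; the grouping, the synthetic clusters, and the use of scikit-learn's LogisticRegression are assumptions made for the example, not part of any standard.

```python
# Two-level hierarchical classification sketch: a coarse model chooses a group,
# then a per-group model chooses the final class within that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

groups = {"mammal": ["cat", "dog"], "bird": ["sparrow", "eagle"]}   # assumed grouping
group_of = {cls: g for g, members in groups.items() for cls in members}

def fit_hierarchy(X, y):
    coarse = LogisticRegression().fit(X, [group_of[label] for label in y])
    fine = {}
    for g, members in groups.items():
        idx = [i for i, label in enumerate(y) if label in members]
        fine[g] = LogisticRegression().fit(X[idx], [y[i] for i in idx])
    return coarse, fine

def predict_hierarchy(coarse, fine, X):
    return [fine[g].predict(x.reshape(1, -1))[0]
            for g, x in zip(coarse.predict(X), X)]

# Tiny synthetic data set: four well-separated clusters, one per class.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2)) + np.repeat([[0, 0], [0, 4], [4, 0], [4, 4]], 20, axis=0)
y = np.array(["cat"] * 20 + ["dog"] * 20 + ["sparrow"] * 20 + ["eagle"] * 20)

coarse, fine = fit_hierarchy(X, y)
print(predict_hierarchy(coarse, fine, X[:3]))   # first three samples are labelled "cat"
```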
|
https://en.wikipedia.org/wiki/Hierarchical_classifier
|
Hierarchical epistemologyis atheory of knowledgewhich posits that beings have different access torealitydepending on theirontologicalrank.[1]
|
https://en.wikipedia.org/wiki/Hierarchical_epistemology
|
Hierarchical INTegration, or HINT for short, is a computer benchmark that ranks a computer system as a whole (i.e. the entire computer instead of individual components). It measures the full range of performance, mostly based on the amount of work a computer can perform over time. A system with a very fast processor would likely be rated poorly if the buses were very poor compared to those of another system that had both an average processor and average buses. For example, Macintosh computers with relatively slow processors (800 MHz) at one time performed better than x86-based systems with processors running at nearly 2 GHz.
HINT is known for being almost immune to artificial optimization and can be used by many computers ranging from a calculator to a supercomputer. It was developed at theU.S. Department of Energy'sAmes Laboratoryand is licensed under the terms of theGNU General Public License.
HINT is intended to be "scalable" to run on any size computer, from small serial systems to highly parallel supercomputers.[1]The person using the HINT benchmark can use any floating-point or integer type.[2]
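The heart of the benchmark is to measure how quickly the whole system improves the quality of an answer, rather than how fast it completes a fixed amount of work. The toy sketch below conveys that idea by hierarchically tightening upper and lower bounds on a simple definite integral and reporting quality gained per second; the integrand, the refinement scheme, and the quality metric are simplified assumptions rather than the exact HINT kernel.

```python
# Toy "quality over time" benchmark: repeatedly refine upper and lower bounds
# on a definite integral and report how fast the bounds tighten.
import time

def f(x):
    return (1.0 - x) / (1.0 + x)   # smooth, monotonically decreasing on [0, 1]

def bounds(n):
    """Upper/lower Riemann sums of f on [0, 1] with n subintervals."""
    h = 1.0 / n
    lower = sum(f((i + 1) * h) for i in range(n)) * h   # right endpoints (f decreasing)
    upper = sum(f(i * h) for i in range(n)) * h         # left endpoints
    return lower, upper

start = time.perf_counter()
n = 2
while time.perf_counter() - start < 0.5:    # run for roughly half a second
    lo, hi = bounds(n)
    quality = 1.0 / (hi - lo)               # tighter bounds -> higher quality
    n *= 2                                  # hierarchical refinement: halve each interval

elapsed = time.perf_counter() - start
print(f"final quality {quality:.1f} after {elapsed:.2f}s  (~{quality / elapsed:.0f} quality/s)")
```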
HINT benchmark results have been published comparing a variety of parallel and uniprocessor systems.[3]
A related tool ANALYTIC HINT can be used as a design tool to estimate the benefits of using more memory, a faster processor, or improved communications (bus speed) within the system.[4]
|
https://en.wikipedia.org/wiki/Hierarchical_INTegration
|
A hierarchical organization or hierarchical organisation (see spelling differences) is an organizational structure where every entity in the organization, except one, is subordinate to a single other entity.[1] This arrangement is a form of hierarchy. In an organization, this hierarchy usually consists of a single person or group holding power at the top, with subsequent levels of power beneath them. This is the dominant mode of organization among large organizations; most corporations, governments, criminal enterprises, and organized religions are hierarchical organizations with different levels of management power or authority.[2] For example, the broad, top-level overview of the hierarchy of the Catholic Church consists of the Pope, then the Cardinals, then the Archbishops, and so on. Another example is the hierarchy between the four castes in the Hindu caste system, which arises from the religious belief "that each is derived from a different part of the creator God's (Brahma) body, descending from the head downwards."[3]
Members of hierarchical organizational structures mainly communicate with their immediate superior and their immediate subordinates. Structuring organizations in this way is useful, partly because it reduces the communication overhead costs by limiting information flows.[2]
A hierarchy is typically visualized as a pyramid, where the height of the ranking or person depicts their power status and the width of that level represents how many people or business divisions are at that level relative to the whole; the highest-ranking people are at the apex, and there are very few of them, and in many cases only one; the base may include thousands of people who have no subordinates. These hierarchies are typically depicted with a tree or triangle diagram, creating an organizational chart or organogram. Those nearest the top have more power than those nearest the bottom, and there are fewer people at the top than at the bottom.[2] As a result, superiors in a hierarchy generally have higher status and obtain higher salaries and other rewards than their subordinates.[4]
Although the image of organizational hierarchy as a pyramid is widely used, strictly speaking such a pyramid (or organizational chart as its representation) draws on two mechanisms:hierarchyanddivision of labour. As such, a hierarchy can, for example, also entail a boss with a single employee.[5]When such a simple hierarchy grows by subordinates specialising (e.g. inproduction,sales, andaccounting) and subsequently also establishing and supervising their own (e.g. production, sales, accounting) departments, the typical pyramid arises. This specialisation process is calleddivision of labour.
Governmental organizations and mostcompaniesfeature similar hierarchical structures.[4]Traditionally, themonarchstood at the pinnacle of thestate. In many countries,feudalismandmanorialismprovided a formalsocial structurethat established hierarchical links pervading every level of society, with the monarch at the top.
In modern post-feudal states the nominal top of the hierarchy still remains ahead of state– sometimes apresidentor aconstitutional monarch, although in many modern states the powers of the head of state are delegated among different bodies. Below or alongside this head there is commonly asenate,parliamentorcongress; such bodies in turn often delegate the day-to-day running of the country to aprime minister, who may head acabinet. In manydemocracies, constitutions theoretically regard"the people"as the notional top of the hierarchy, above the head of state; in reality, the people's influence is often restricted to voting in elections or referendums.[6][7][8]
Inbusiness, thebusiness ownertraditionally occupies the pinnacle of theorganization. Most modern large companies lack a single dominantshareholderand for most purposes delegate the collective power of the business owners to aboard of directors, which in turn delegates the day-to-day running of the company to amanaging directororCEO.[9]Again, although the shareholders of the company nominally rank at the top of the hierarchy, in reality many companies are run at least in part as personal fiefdoms by theirmanagement.[10]Corporate governancerules attempt to mitigate this tendency.
Smaller and more informal social units –families,bands,tribes,special interest groups– which may form spontaneously, have little need for complex hierarchies[11]– or indeed for any hierarchies. They may rely onself-organizingtendencies.
A conventional view ascribes the growth of hierarchical social habits and structures to increased complexity;[12]thereligious syncretism[13]and issues oftax-gathering[14]in expanding empires played a role here.
However, others have observed that simple forms of hierarchicalleadershipnaturally emerge from interactions in bothhumanandnon-humanprimatecommunities. For instance, this occurs when a few individuals obtain more status in theirtribe, (extended)familyorclan, or whencompetencesandresourcesare unequally distributed across individuals.[15][16][17]
Theorganizational developmenttheoristElliott Jaquesidentified a special role for hierarchy in his concept ofrequisite organization.[5]
Theiron law of oligarchy, introduced byRobert Michels, describes the inevitable tendency of hierarchical organizations to becomeoligarchicin their decision making.[18]
The Peter Principle, a term coined by Laurence J. Peter, holds that the selection of a candidate for a position in a hierarchical organization is based on the candidate's performance in their current role, rather than on abilities relevant to the intended role. Thus, employees only stop being promoted once they can no longer perform effectively, and managers in a hierarchical organization "rise to the level of their incompetence."
Hierarchiologyis another term coined by Laurence J. Peter, described in his humorous book of the same name, to refer to the study of hierarchical organizations and the behavior of their members.
Having formulated the Principle, I discovered that I had inadvertently founded a new science, hierarchiology, the study of hierarchies. The term hierarchy was originally used to describe the system of church government by priests graded into ranks. The contemporary meaning includes any organization whose members or employees are arranged in order of rank, grade or class. Hierarchiology, although a relatively recent discipline, appears to have great applicability to the fields of public and private administration.
David Andrews' bookThe IRG Solution: Hierarchical Incompetence and how to Overcome itargued that hierarchies were inherently incompetent, and were only able to function due to large amounts of informallateral communicationfostered by private informal networks.
Hierarchical organization is a phenomenon with many faces. To understand and map this diversity, various typologies have been developed. Formal versus informal hierarchy is a well-known typology, but one can also distinguish four hierarchy types.
A well-known distinction is between formal and informal hierarchy in organizational settings. According toMax Weber, the formal hierarchy is the verticalsequenceof official positions within one explicitorganizational structure, whereby each position or office is under the control andsupervisionof a higher one.[19]Theformal hierarchycan thus be defined as "an official system of unequal person-independent roles and positions which are linked via lines of top-down command-and-control."[20]By contrast, aninformal hierarchycan be defined as person-dependent social relationships of dominance and subordination, emerging from social interaction and becoming persistent over time through repeated social processes.[20]The informal hierarchy between two or more people can be based on difference in, for example,seniority,experienceorsocial status.[20][17]The formal and informal hierarchy may complement each other in any specific organization and therefore tend toco-existin any organization.[17]But the general pattern observed in many organizations is that when the formal hierarchy decreases (over time), the informal hierarchy increases, or vice versa.[20]
A more elaboratetypologyof hierarchy in social systems entails four types: hierarchy as a ladder of formal authority, ladder of achieved status, self-organized ladder of responsibility, and an ideology-based ladder.[21]The first two types can be equated with the formal and informal hierarchy, as previously defined. Accordingly, this typology extends the formal and informal hierarchy with two other types.
This type of hierarchy is defined as a sequence of levels of formalauthority, that is, the authority tomake decisions.[21][22][23][2]This results in a ladder that systematically differentiates the authority to make decisions. A typical authority-based hierarchy incompaniesis: theboard of directors,CEO, departmentalmanagers,team leaders, and otheremployees.[21]The authority-based hierarchy, also known as the formal hierarchy, to a large extent arises from the legal structure of the organization: for example, the owner of the firm is also the CEO or appoints the CEO, who in turn appoints and supervises departmental managers, and so forth.[21]
Also known as the informal hierarchy (defined earlier), this type of hierarchy draws on unofficial mechanisms for ranking people.[24][25]It involves differences instatus, other than those arising from formal authority. Status is one's social standing or professional position, relative to those of others.[26][27]In anthropology and sociology, this notion of status is also known asachieved status, the social position that is earned instead of beingascribed.[28][29]The underlying mechanism issocial stratification, which draws on shared cultural beliefs (e.g. regarding expertise and seniority as drivers of status) that can make status differences between people appear natural and fair.[30][31]A ladder of achieved status issocially constructed, which makes it fundamentally different from the ladder of authority that (largely) arises from an underlying legal structure.[21]The social-constructivist nature of status also implies that ladders of achieved status especially arise in groups of people that frequently interact—for example, a work unit, team, family, or neighbourhood.[32][33][25][27]
In the literature on organizationdesignandagility, hierarchy is conceived as arequisitestructure that emerges in aself-organizedmanner from operational activities.[21][5][34][35]For example, a small firm composed of only three equivalent partners can initially operate without any hierarchy; but substantial growth in terms of people and their tasks will create the need for coordination and related managerial activities; this implies, for example, that one of the partners starts doing these coordination activities. Another example involves organizations adoptingholacracyorsociocracy, with people at all levels self-organizing their responsibilities;[34][35][36]that is, they exercise "real" rather than formal authority.[37]In this respect,responsibilityis an expression of self-restraint and intrinsicobligation.[38][39]Examples of self-organized ladders of responsibility have also been observed in (the early stages of)worker cooperatives, likeMondragon, in which hierarchy is created in a bottom-up manner.[40]
In a hierarchy driven byideology, people establish themselves as legitimateleadersby invoking some (e.g., religious, spiritual or political) idea to justify the hierarchical relationship between higher and lower levels.[41][42][43]Ideological hierarchies have a long history, for example in the administrative hierarchies headed bypharaohsinancient Egyptor those headed bykingsinmedieval Europe.[44]The mainlegitimacyof any pharaoh or king arose from the strong belief in the idea that the pharaoh/king acts as theintermediarybetween the gods and the people, and thus deputizes for the gods.[44]Another example is the hierarchy prevailing until today in theBalinesecommunity, which is strongly connected to the rice cycle that is believed to constitute a hierarchical relationship between gods and humans, both of whom must play their parts to secure a good crop; the same ideology also legitimizes the hierarchical relationship between high and low castes in Bali.[43]Ideological ladders have also long sustained the way theCatholic churchand theHindu caste systemoperates.[4]Hierarchies of ideology also exist in many other settings, for instance, those driven by prevailingvaluesandbeliefsabout how the (e.g. business) world should operate.[45][46]An example is the ideology of "maximizingshareholder value", which is widely used inpublicly traded companies.[10]This ideology helps in creating and sustaining the image of a clear hierarchy from shareholders to employees—although, in practice, the separation of legal ownership and actual control implies that theCEOtogether with theBoard of Directorsare at the top of the corporate hierarchy.[9]Given that public corporations (primarily) thrive on ladders of authority; this example also demonstrates how ladders of authority and ideology can complement and reinforce each other.[21]
The work of diverse theorists such asWilliam James(1842–1910),Michel Foucault(1926–1984) andHayden White(1928–2018) makes important critiques of hierarchicalepistemology. James famously asserts in his work onradical empiricismthat clear distinctions of type and category are a constant but unwritten goal of scientific reasoning, so that when they are discovered, success is declared.[citation needed]But if aspects of the world are organized differently, involving inherent and intractable ambiguities, then scientific questions are often considered unresolved. A hesitation to declare success upon the discovery of ambiguities leaves heterarchy at an artificial and subjective disadvantage in the scope of human knowledge. This bias is an artifact of an aesthetic or pedagogical preference for hierarchy, and not necessarily an expression of objective observation.[citation needed]
Hierarchies and hierarchical thinking have been criticized by many people, includingSusan McClary(born 1946), and by one political philosophy which vehemently opposes hierarchical organization:anarchism.Heterarchy, the most commonly proposed alternative to hierarchy, has been combined with responsible autonomy byGerard Fairtloughin his work ontriarchy theory. The most beneficial aspect of a hierarchical organization is the clear command-structure that it establishes. However, hierarchy may become dismantled byabuse of power.[47]
Matrix organizationsbecame a trend (ormanagement fad) in the second half of the 20th century.[48]
Amidst constant innovation ininformation and communication technologies, hierarchical authority structures are giving way to greaterdecision-makinglatitude for individuals and more flexible definitions of job activities; and this new style of work presents a challenge to existing organizational forms, with some[quantify]research studies contrasting traditional organizational forms with groups that operate asonline communitiesthat are characterized by personal motivation and the satisfaction of making one's own decisions.[49]When all levels of a hierarchical organization have access to information and communication via digital means,power structuresmay align more as awirearchy, enabling the flow of power and authority to be based not on hierarchical levels, but on information, trust, credibility, and a focus on results.[citation needed]
|
https://en.wikipedia.org/wiki/Hierarchical_organization
|
TheHierarchical Music Specification Language(HMSL) is amusicprogramming languagewritten in the 1980s byLarry Polansky,Phil Burk, andDavid RosenboomatMills College.[1]Written on top ofForth, it allowed for the creation of real-time interactive music performance systems,algorithmic compositionsoftware, and any other kind of program that requires a high degree of musicalinformatics. It was distributed by Frog Peak Music, and runs with a very lightmemory footprint(~1megabyte) onMacintoshandAmigasystems.
UnlikeCSoundand other languages for audiosynthesis, HMSL is primarily a language for makingmusic. As such, it interfaces with sound-making devices through built-inMIDIclasses. However, it has a high degree of built-in understanding of musicperformance practice,tuning systems, andscorereading. Its main interface for the manipulation of musicalparametersis through the metaphor ofshapes, which can be created, altered, and combined to create a musicaltexture, either by themselves or in response to real-time orscheduledevents in a score.
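HMSL itself is built on Forth, so the snippet below is not HMSL code; it is only a rough Python illustration of the shape metaphor, in which a shape is an ordered collection of parameter values that can be transformed and combined before being rendered as note events. All names and the (pitch, duration) event format are invented for the example.

```python
# Illustration (not HMSL) of the "shape" metaphor: a shape is an ordered list
# of parameter values; shapes can be altered and combined, then mapped to
# MIDI-like (pitch, duration) note events.
pitch_shape = [60, 62, 64, 67, 69]          # a pentatonic contour, as MIDI note numbers
rhythm_shape = [0.5, 0.25, 0.25, 0.5, 1.0]  # durations in beats

def transpose(shape, interval):
    return [p + interval for p in shape]

def combine(pitches, durations):
    """Zip two shapes into (pitch, duration) note events."""
    return list(zip(pitches, durations))

melody = combine(transpose(pitch_shape, 12), rhythm_shape)
for pitch, dur in melody:
    print(f"note {pitch:3d}  duration {dur} beats")
```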
HMSL has been widely used by composers working in algorithmic composition for over twenty years. In addition to the authors (who are also composers), HMSL has been used in pieces byNick Didkovsky,The Hub,James Tenney,Tom Erbe, andPauline Oliveros.
AJavaport of HMSL was developed byNick Didkovskyunder the nameJMSL, and is designed to interface to theJSynAPI.
HMSL is licensed under the freeApache License V2.
|
https://en.wikipedia.org/wiki/Hierarchical_Music_Specification_Language
|
Anopen service interface definition(OSID) is a programmatic interface specification describing a service. These interfaces are specified by theOpen Knowledge Initiative(OKI) to implement aservice-oriented architecture(SOA) to achieveinteroperabilityamong applications across a varied base of underlying and changing technologies.
To preserve the investment in software engineering, program logic is separated from underlying technologies through the use of software interfaces, each of which defines a contract between a service consumer and a service provider. This separation is the basis of any valid SOA. While some methods define the service interface boundary at a protocol or server level, OSIDs place the boundary at the application level to effectively insulate the consumer from protocols, server identities, and utility libraries that are in the domain of a service provider, resulting in software which is easier to develop, longer-lasting, and usable across a wider array of computing environments.
OSIDs assist insoftware designand development by breaking up the problem space across service interface boundaries. Because network communication issues are addressed within a service provider andbelowthe interface, there isn't an assumption that every service provider implement a remote communications protocol (though many do). OSIDs are also used for communication and coordination among the various components of complex software which provide a means of organizing design and development activities for simplifiedproject management.
OSID providers (implementations) are often reused across a varied set of applications. Once software is made to understand the interface contract for a service, other compliant implementations may be used in its place. This achieves reusability at a high level (a service level) and also serves to easily scale software written for smaller, more dedicated purposes.
An OSID provider implementation may be composed of an arbitrary number of other OSID providers. This layering technique is an obvious means of abstraction. When all the OSID providers implement the same service, this is called an adapter pattern. Adapter patterns are powerful techniques to federate, multiplex, or bridge different services behind the same interface contract without modification to the application.
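As a generic sketch of that adapter idea (the interface and class names below are invented for illustration and are not actual OSID definitions), a consumer programs against a single service interface while an adapter implements the same interface by delegating to several underlying providers:

```python
# Generic sketch of the adapter/federation idea described above. The
# RepositoryService interface and provider names are invented, not actual OSIDs.
from abc import ABC, abstractmethod

class RepositoryService(ABC):
    """The service contract the consumer programs against."""
    @abstractmethod
    def lookup(self, asset_id: str) -> str: ...

class LocalProvider(RepositoryService):
    def __init__(self, assets):
        self._assets = assets
    def lookup(self, asset_id):
        return self._assets[asset_id]          # raises KeyError if absent

class FederatingAdapter(RepositoryService):
    """Implements the same contract by trying several providers in turn."""
    def __init__(self, providers):
        self._providers = providers
    def lookup(self, asset_id):
        for provider in self._providers:
            try:
                return provider.lookup(asset_id)
            except KeyError:
                continue
        raise KeyError(asset_id)

# The consumer only ever sees RepositoryService, so providers can be swapped
# or federated without changing application code.
service: RepositoryService = FederatingAdapter([
    LocalProvider({"a1": "syllabus.pdf"}),
    LocalProvider({"b2": "lecture-notes.txt"}),
])
print(service.lookup("b2"))
```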
|
https://en.wikipedia.org/wiki/Hierarchy_Open_Service_Interface_Definition
|
In theoretical physics, the hierarchy problem is the problem concerning the large discrepancy between aspects of the weak force and gravity.[1] There is no scientific consensus on why, for example, the weak force is 10^24 times stronger than gravity.
A hierarchy problem[2]occurs when the fundamental value of some physical parameter, such as acoupling constantor a mass, in someLagrangianis vastly different from its effective value, which is the value that gets measured in an experiment. This happens because the effective value is related to the fundamental value by a prescription known asrenormalization, which applies corrections to it.
Typically the renormalized values of parameters are close to their fundamental values, but in some cases it appears that there has been a delicate cancellation between the fundamental quantity and the quantum corrections. Hierarchy problems are related to fine-tuning problems and problems of naturalness.
Throughout the 2010s, many scientists[3][4][5][6][7]argued that the hierarchy problem is a specific application ofBayesian statistics.
Studying renormalization in hierarchy problems is difficult, because such quantum corrections are usually power-law divergent, which means that the shortest-distance physics is most important. Because we do not know the precise details of quantum gravity, we cannot even address how this delicate cancellation between two large terms occurs. Therefore, researchers are led to postulate new physical phenomena that resolve hierarchy problems without fine-tuning.
Suppose a physics model requires four parameters to produce a very high-quality working model capable of generating predictions regarding some aspect of our physical universe. Suppose we find through experiments that the parameters have values: 1.2, 1.31, 0.9 and a value near 4×10^29. One might wonder how such figures arise. In particular, one might be especially curious about a theory where three values are close to one, and the fourth is so different; i.e., the huge disproportion we seem to find between the first three parameters and the fourth. If one force is so much weaker than the others that it needs a factor of 4×10^29 to allow it to be related to the others in terms of effects, we might also wonder how our universe came to be so exactly balanced when its forces emerged. In current particle physics, the differences between some actual parameters are much larger than this, so the question is noteworthy.
One explanation given by philosophers is theanthropic principle. If the universe came to exist by chance and vast numbers of other universes exist or have existed, then lifeforms capable of performing physics experiments only arose in universes that, by chance, had very balanced forces. All of the universes where the forces were not balanced did not develop life capable of asking this question. So if lifeforms likehuman beingsare aware and capable of asking such a question, humans must have arisen in a universe having balanced forces, however rare that might be.[8][9]
A second possible answer is that there is a deeper understanding of physics that we currently do not possess. There may be parameters from which we can derive physical constants that have fewer unbalanced values, or there may be a model with fewer parameters.[citation needed]
In particle physics, the most important hierarchy problem is the question of why the weak force is 10²⁴ times as strong as gravity.[10] Both of these forces involve constants of nature, the Fermi constant for the weak force and the Newtonian constant of gravitation for gravity. Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large and is expected to be closer to Newton's constant, unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it.
More technically, the question is why the Higgs boson is so much lighter than the Planck mass (or the grand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears, unless there is an incredible fine-tuning cancellation between the quadratic radiative corrections and the bare mass.
The problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings.
Many solutions have been proposed by physicists.
Some physicists believe that one may solve the hierarchy problem via supersymmetry. Supersymmetry can explain how a tiny Higgs mass can be protected from quantum corrections. Supersymmetry removes the power-law divergences of the radiative corrections to the Higgs mass and solves the hierarchy problem as long as the supersymmetric particles are light enough to satisfy the Barbieri–Giudice criterion.[11] This still leaves open the mu problem, however. The tenets of supersymmetry are being tested at the LHC, although no evidence for supersymmetry has been found so far.
Each particle that couples to the Higgs field has an associatedYukawa couplingλf{\textstyle \lambda _{f}}. The coupling with the Higgs field for fermions gives an interaction termLYukawa=−λfψ¯Hψ{\textstyle {\mathcal {L}}_{\mathrm {Yukawa} }=-\lambda _{f}{\bar {\psi }}H\psi }, withψ{\textstyle \psi }being theDirac fieldandH{\textstyle H}theHiggs field. Also, the mass of a fermion is proportional to its Yukawa coupling, meaning that the Higgs boson will couple most to the most massive particle. This means that the most significant corrections to the Higgs mass will originate from the heaviest particles, most prominently the top quark. By applying theFeynman rules, one gets the quantum corrections to the Higgs mass squared from a fermion to be:
ΔmH2=−|λf|28π2[ΛUV2+…].{\displaystyle \Delta m_{\rm {H}}^{2}=-{\frac {\left|\lambda _{f}\right|^{2}}{8\pi ^{2}}}[\Lambda _{\mathrm {UV} }^{2}+\dots ].}
The ΛUV{\textstyle \Lambda _{\mathrm {UV} }} is called the ultraviolet cutoff and is the scale up to which the Standard Model is valid. If we take this scale to be the Planck scale, then the correction to the Higgs mass diverges quadratically. However, suppose there existed two complex scalars (taken to be spin 0) such that:
λS=|λf|2{\displaystyle \lambda _{S}=\left|\lambda _{f}\right|^{2}}
(the couplings to the Higgs are exactly the same).
Then by the Feynman rules, the correction (from both scalars) is:
ΔmH2=2×λS16π2[ΛUV2+…].{\displaystyle \Delta m_{\rm {H}}^{2}=2\times {\frac {\lambda _{S}}{16\pi ^{2}}}[\Lambda _{\mathrm {UV} }^{2}+\dots ].}
(Note that the contribution here is positive. This is because of the spin-statistics theorem, which means that fermions will have a negative contribution and bosons a positive contribution. This fact is exploited.)
This gives a total contribution to the Higgs mass to be zero if we include both the fermionic and bosonic particles.Supersymmetryis an extension of this that creates 'superpartners' for all Standard Model particles.[12]
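To make the size of the cancellation concrete, here is a rough numerical sketch of the two leading Λ² terms written above, using an illustrative top-like Yukawa coupling and a Planck-scale cutoff (order-of-magnitude values only):

```python
import math

lam_f = 1.0           # illustrative Yukawa coupling (the top quark's is close to 1)
Lambda_UV = 1.22e19   # cutoff taken at the Planck scale, in GeV

# Leading quadratically divergent pieces from the two formulas above
delta_m2_fermion = -abs(lam_f) ** 2 / (8 * math.pi ** 2) * Lambda_UV ** 2
lam_S = abs(lam_f) ** 2                  # supersymmetric relation lambda_S = |lambda_f|^2
delta_m2_scalars = 2 * lam_S / (16 * math.pi ** 2) * Lambda_UV ** 2

print(f"fermion loop: {delta_m2_fermion:.3e} GeV^2")
print(f"scalar loops: {delta_m2_scalars:.3e} GeV^2")
print(f"sum:          {delta_m2_fermion + delta_m2_scalars:.3e} GeV^2")
# Each term is ~1e36 GeV^2, roughly 32 orders of magnitude above the observed
# Higgs mass squared (~1.6e4 GeV^2), yet their sum cancels exactly.
```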
Without supersymmetry, a solution to the hierarchy problem has been proposed using just the Standard Model. The idea can be traced back to the fact that the term in the Higgs field that produces the uncontrolled quadratic correction upon renormalization is the quadratic one. If the Higgs field had no mass term, then no hierarchy problem arises. But if the quadratic term in the Higgs field is absent, one must find another way to recover the breaking of electroweak symmetry through a non-null vacuum expectation value. This can be obtained using the Weinberg–Coleman mechanism, with terms in the Higgs potential arising from quantum corrections. Mass obtained in this way is far too small with respect to what is seen in accelerator facilities, and so a conformal Standard Model needs more than one Higgs particle. This proposal was put forward in 2006 by Krzysztof Antoni Meissner and Hermann Nicolai[13] and is currently under scrutiny. But if no further excitation is observed beyond the one seen so far at the LHC, this model would have to be abandoned.
No experimental or observational evidence of extra dimensions has been officially reported. Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions.[14] However, extra dimensions could explain why gravity is so weak, and why the expansion of the universe is faster than expected.[15]
If we live in a 3+1 dimensional world, then we calculate the gravitational force viaGauss's law for gravity:
g(r)=−Gmerr2(1){\displaystyle \mathbf {g} (\mathbf {r} )=-Gm{\frac {\mathbf {e_{r}} }{r^{2}}}\qquad (1)}
which is simplyNewton's law of gravitation. Note that Newton's constantGcan be rewritten in terms of thePlanck mass.
G=ℏcMPl2{\displaystyle G={\frac {\hbar c}{M_{\mathrm {Pl} }^{2}}}}
If we extend this idea toδextra dimensions, then we get:
g(r)=−merMPl3+1+δ2+δr2+δ(2){\displaystyle \mathbf {g} (\mathbf {r} )=-m{\frac {\mathbf {e_{r}} }{M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }r^{2+\delta }}}\qquad (2)}
where MPl3+1+δ{\textstyle M_{\mathrm {Pl} _{3+1+\delta }}} is the 3+1+δ{\textstyle \delta }-dimensional Planck mass. However, we are assuming that these extra dimensions are the same size as the normal 3+1 dimensions. Let us say instead that the extra dimensions are of size n, much smaller than the normal dimensions. If we let r ≪ n, then we get (2). If instead we let r ≫ n, we expect to recover the usual Newton's law: when r ≫ n, the flux in the extra dimensions becomes a constant, because there is no extra room for gravitational flux to flow through. Thus the flux will be proportional to n^δ, because this is the flux in the extra dimensions. The formula is:
g(r)=−merMPl3+1+δ2+δr2nδ−merMPl2r2=−merMPl3+1+δ2+δr2nδ{\displaystyle {\begin{aligned}\mathbf {g} (\mathbf {r} )&=-m{\frac {\mathbf {e_{r}} }{M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }r^{2}n^{\delta }}}\\[2pt]-m{\frac {\mathbf {e_{r}} }{M_{\mathrm {Pl} }^{2}r^{2}}}&=-m{\frac {\mathbf {e_{r}} }{M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }r^{2}n^{\delta }}}\end{aligned}}}
which gives:
1MPl2r2=1MPl3+1+δ2+δr2nδ⟹MPl2=MPl3+1+δ2+δnδ{\displaystyle {\begin{aligned}{\frac {1}{M_{\mathrm {Pl} }^{2}r^{2}}}&={\frac {1}{M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }r^{2}n^{\delta }}}\\[2pt]\implies \quad M_{\mathrm {Pl} }^{2}&=M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }n^{\delta }\end{aligned}}}
Thus the fundamental Planck mass (the extra-dimensional one) could actually be small, meaning that gravity is actually strong, but this must be compensated by the number of the extra dimensions and their size. Physically, this means that gravity is weak because there is a loss of flux to the extra dimensions.
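A rough numerical sketch of the relation MPl² = MPl,3+1+δ^(2+δ) n^δ derived above, in natural units; the TeV-scale fundamental mass and the choice δ = 2 are illustrative assumptions in the spirit of large-extra-dimension scenarios:

```python
# Sketch: size the extra dimensions would need for gravity to be "strong" at ~1 TeV.
# Assumed inputs: fundamental scale M_star ~ 1 TeV and delta = 2 extra dimensions.
M_Pl = 1.22e19       # observed 4D Planck mass, GeV
M_star = 1.0e3       # assumed fundamental (higher-dimensional) Planck mass, GeV
delta = 2            # assumed number of extra dimensions

# From M_Pl^2 = M_star^(2+delta) * n^delta; in natural units n has dimension 1/GeV
n_natural = (M_Pl ** 2 / M_star ** (2 + delta)) ** (1.0 / delta)

hbar_c = 1.973e-16   # GeV * m, converts an inverse energy to a length
n_meters = n_natural * hbar_c
print(f"required size of each extra dimension: ~{n_meters:.1e} m")
# ~2e-3 m with these inputs; millimetre-scale sizes like this are why short-distance
# tests of Newtonian gravity constrain such scenarios.
```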
This section is adapted fromQuantum Field Theory in a Nutshellby A. Zee.[16]
In 1998Nima Arkani-Hamed,Savas Dimopoulos, andGia Dvaliproposed theADD model, also known as the model withlarge extra dimensions, an alternative scenario to explain the weakness ofgravityrelative to the other forces.[17][18]This theory requires that the fields of theStandard Modelare confined to a four-dimensionalmembrane, while gravity propagates in several additional spatial dimensions that are large compared to thePlanck scale.[19]
In 1998–99 Merab Gogberashvili published on arXiv (and subsequently in peer-reviewed journals) a number of articles where he showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then it is possible to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem.[20][21][22] It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability.
Subsequently, the closely related Randall–Sundrum scenarios were proposed, which offered their own solution to the hierarchy problem.
In 2019, a pair of researchers proposed thatIR/UV mixingresulting in the breakdown of theeffectivequantum field theorycould resolve the hierarchy problem.[23]In 2021, another group of researchers showed that UV/IR mixing could resolve the hierarchy problem in string theory.[24]
Inphysical cosmology, current observations in favor of anaccelerating universeimply the existence of a tiny, but nonzerocosmological constant. This problem, called thecosmological constant problem, is a hierarchy problem very similar to that of the Higgs boson mass problem, since the cosmological constant is also very sensitive to quantum corrections, but its calculation is complicated by the necessary involvement ofgeneral relativityin the problem. Proposed solutions to the cosmological constant problem include modifying and/or extending gravity,[25][26][27]adding matter with unvanishing pressure,[28]and UV/IR mixing in the Standard Model and gravity.[29][30]
Some physicists have resorted toanthropic reasoningto solve the cosmological constant problem,[31]but it is disputed whether such anthropic reasoning is scientific.[32][33]
|
https://en.wikipedia.org/wiki/Hierarchy_problem
|
Aholonis something that is simultaneously a whole in and of itself, as well as a part of a larger whole. In this way, a holon can be considered asubsystemwithin a largerhierarchicalsystem.[1]
The holon represents a way to overcome thedichotomy between parts and wholes, as well as a way to account for both theself-assertiveand the integrative tendencies oforganisms.[2]Holons are sometimes discussed in the context ofself-organizing holarchic open (SOHO) systems.[2][1]
The wordholon(Ancient Greek:ὅλον) is a combination of the Greekholos(ὅλος) meaning 'whole', with the suffix-onwhich denotes aparticleor part (as inprotonandneutron). Holons are self-reliant units that possess a degree of independence and can handle contingencies without asking higher authorities for instructions (i.e., they have a degree ofautonomy). These holons are also simultaneously subject to control from one or more of these higher authorities. The first property ensures that holons are stable forms that are able to withstand disturbances, while the latter property signifies that they are intermediate forms, providing a context for the proper functionality for the larger whole.
The termholonwas coined byArthur KoestlerinThe Ghost in the Machine(1967), though Koestler first articulated the concept inThe Act of Creation(1964), in which he refers to the relationship between the searches forsubjectiveandobjectiveknowledge:
Einstein's space is no closer to reality than Van Gogh's sky. The glory of science is not in a truth more absolute than the truth of Bach or Tolstoy, but in the act of creation itself. The scientist's discoveries impose his own order on chaos, as the composer or painter imposes his; an order that always refers to limited aspects of reality, and is based on the observer's frame of reference, which differs from period to period as a Rembrandt nude differs from a nude by Manet.[3]
Koestler would finally propose the termholoninThe Ghost in the Machine(1967), using it to describe natural organisms as composed of semi-autonomous sub-wholes (or, parts) that are linked in a form of hierarchy, aholarchy, to form a whole.[2][4][5]The title of the book itself points to the notion that the entire 'machine' of life and of theuniverseitself is ever-evolving toward more and more complex states, as if a ghost were operating the machine.[6]
The first observation was influenced by a story told to him byHerbert A. Simon—the 'parableof the two watchmakers'—in which Simon concludes thatcomplex systemsevolve from simple systems much more rapidly when there are stable intermediate forms present in theevolutionary processcompared to when they are not present:[7]
There once were two watchmakers, named Bios and Mekhos, who made very fine watches. The phones in their workshops rang frequently; new customers were constantly calling them. However, Bios prospered while Mekhos became poorer and poorer. In the end, Mekhos lost his shop and worked as a mechanic for Bios. What was the reason behind this?
The watches consisted of about 1000 parts each. The watches that Mekhos made were designed such that, when he had to put down a partly assembled watch (for instance, to answer the phone), it immediately fell into pieces and had to be completely reassembled from the basic elements. On the other hand Bios designed his watches so that he could put together subassemblies of about ten components each. Ten of these subassemblies could be put together to make a larger sub-assembly. Finally, ten of the larger subassemblies constituted the whole watch. When Bios had to put his watches down to attend to some interruption they did not break up into their elemental parts but only into their sub-assemblies.
Now, the watchmakers were each disturbed at the same rate of once per hundred assembly operations. However, due to their different assembly methods, it took Mekhos four thousand times longer than Bios to complete a single watch.
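The quoted factor can be checked with a back-of-the-envelope model of the parable (the restart-from-scratch rule and the interruption rate of one per hundred operations come from the story; the exact factor depends on how lost work is counted):

```python
# Expected number of elementary operations to finish a k-step assembly when each
# operation is interrupted with probability p and an interruption destroys the
# partly finished assembly (restart from scratch): E(k) = ((1 - p)**-k - 1) / p
def expected_ops(k: int, p: float) -> float:
    return ((1.0 - p) ** -k - 1.0) / p

p = 1 / 100                       # one interruption per hundred operations

mekhos = expected_ops(1000, p)    # one monolithic 1000-part assembly
# Bios: 100 subassemblies of 10, 10 larger units of 10, then 1 final assembly of 10
bios = (100 + 10 + 1) * expected_ops(10, p)

print(f"Mekhos: {mekhos:,.0f} operations; Bios: {bios:,.0f} operations")
print(f"ratio ~ {mekhos / bios:,.0f}")
# The ratio comes out in the low thousands, the same order of magnitude as the
# "four thousand times longer" quoted in the parable.
```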
The second observation was made by Koestler himself in his analysis of hierarchies and stable intermediate forms in non-livingmatter(atomicandmolecularstructure),living organisms, andsocial organizations.
|
https://en.wikipedia.org/wiki/Holarchy#Different_meanings
|
In moral philosophy, instrumental and intrinsic value are the distinction between what is a means to an end and what is an end in itself.[1] Things are deemed to have instrumental value (or extrinsic value[2]) if they help one achieve a particular end; intrinsic values, by contrast, are understood to be desirable in and of themselves. A tool or appliance, such as a hammer or washing machine, has instrumental value because it helps one pound in a nail or clean clothes, respectively. Happiness and pleasure are typically considered to have intrinsic value insofar as asking why someone would want them makes little sense: they are desirable for their own sake irrespective of their possible instrumental value. The classic names instrumental and intrinsic were coined by sociologist Max Weber, who spent years studying good meanings people assigned to their actions and beliefs.
TheOxford Handbook of Value Theoryprovides three modern definitions of intrinsic and instrumental value:
When people judge efficient means and legitimate ends at the same time, both can be considered as good. However, when ends are judged separately from means, it may result in a conflict: what works may not be right; what is right may not work. Separating the criteria contaminates reasoning about the good. Philosopher John Dewey argued that separating criteria for good ends from those for good means necessarily contaminates recognition of efficient and legitimate patterns of behavior. Economist J. Fagg Foster explained why only instrumental value is capable of correlating good ends with good means. Philosopher Jacques Ellul argued that instrumental value has become completely contaminated by inhuman technological consequences, and must be subordinated to intrinsic supernatural value. Philosopher Anjan Chakravartty argued that instrumental value is only legitimate when it produces good scientific theories compatible with the intrinsic truth of mind-independent reality.
The word value is ambiguous in that it is both a verb and a noun, as well as denoting both a criterion of judgment itself and the result of applying a criterion.[3][4]: 37–44 To reduce ambiguity, throughout this article the noun value names a criterion of judgment, as opposed to valuation, which is an object that is judged valuable. The plural values identifies collections of valuations, without identifying the criterion applied.
Immanuel Kantis famously quoted as saying:
So act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only.[5]
Here, Kant considers both instrumental and intrinsic value, although not calling them by those names.
The classic namesinstrumentalandintrinsicwere coined by sociologistMax Weber, who spent years studying good meanings people assigned to their actions and beliefs. According to Weber, "[s]ocial action, like all action, may be" judged as:[6]: 24–5
Weber's original definitions also include a comment showing his doubt that conditionally efficient means can achieve unconditionally legitimate ends:[6]: 399–400
[T]he more the value to which action is oriented is elevated to the status of an absolute [intrinsic] value, the more "irrational" in this [instrumental] sense the corresponding action is. For the more unconditionally the actor devotes himself to this value for its own sake…the less he is influenced by considerations of the [conditional] consequences of his action.
John Deweythought that belief in intrinsic value was a mistake. Although the application of instrumental value is easily contaminated, it is the only means humans have to coordinate group behaviour efficiently and legitimately.
Every social transaction has good or bad consequences depending on prevailing conditions, which may or may not be satisfied. Continuous reasoning adjusts institutions to keep them working on the right track as conditions change. Changing conditions demand changing judgments to maintain efficient and legitimate correlation of behavior.[7]
For Dewey, "restoring integration and cooperation between man'sbeliefsabout the world in which he lives and his beliefs about the values [valuations] and purposes that should direct his conduct is the deepest problem of modern life."[8]: 255Moreover, a "culture which permits science to destroy traditional values [valuations] but which distrusts its power to create new ones is a culture which is destroying itself."[9]
Dewey agreed withMax Weberthat people talk as if they apply instrumental and intrinsic criteria. He also agreed with Weber's observation that intrinsic value is problematic in that it ignores the relationship between context and consequences of beliefs and behaviors. Both men questioned how anything valued intrinsically "for its own sake" can have operationally efficient consequences. However, Dewey rejects the common belief—shared by Weber—that supernatural intrinsic value is necessary to show humans what is permanently "right." He argues that both efficient and legitimate qualities must be discovered in daily life:
Man who lives in a world of hazards…has sought to attain [security] in two ways. One of them began with an attempt to propitiate the [intrinsic] powers which environ him and determine his destiny. It expressed itself in supplication, sacrifice, ceremonial rite and magical cult.… The other course is to invent [instrumental] arts and by their means turn the powers of nature to account.…[8]: 3[F]or over two thousand years, the…most influential and authoritatively orthodox tradition…has been devoted to the problem of a purely cognitive certification (perhaps by revelation, perhaps by intuition, perhaps by reason) of the antecedent immutable reality of truth, beauty, and goodness.… The crisis in contemporary culture, the confusions and conflicts in it, arise from a division of authority. Scientific [instrumental] inquiry seems to tell one thing, and traditional beliefs [intrinsic valuations] about ends and ideals that have authority over conduct tell us something quite different.… As long as the notion persists that knowledge is a disclosure of [intrinsic] reality…prior to and independent of knowing, and that knowing is independent of a purpose to control the quality of experienced objects, the failure of natural science to disclose significant values [valuations] in its objects will come as a shock.[8]: 43–4
Finding no evidence of "antecedent immutable reality of truth, beauty, and goodness," Dewey argues that both efficient and legitimate goods are discovered in the continuity of human experience:[8]: 114, 172–3, 197
Dewey's ethics replaces the goal of identifying an ultimate end or supreme principle that can serve as a criterion of ethical evaluation with the goal of identifying a method for improving our value judgments. Dewey argued that ethical inquiry is of a piece with empirical inquiry more generally.… This pragmatic approach requires that we locate the conditions of warrant for our value judgments in human conduct itself, not in any a priori fixed reference point outside of conduct, such as in God's commands, Platonic Forms, pure reason, or "nature," considered as giving humans a fixed telos [intrinsic end].[10]
Philosophers label a "fixed reference point outside of conduct" a "natural kind," and presume it to have eternal existence knowable in itself without being experienced. Natural kinds are intrinsic valuations presumed to be "mind-independent" and "theory-independent."[11]
Dewey grants the existence of "reality" apart from human experience, but denied that it is structured as intrinsically real natural kinds.[8]: 122, 196Instead, he sees reality as functional continuity of ways-of-acting, rather than as interaction among pre-structured intrinsic kinds. Humans may intuit static kinds and qualities, but such private experience cannot warrant inferences or valuations about mind-independent reality. Reports or maps of perceptions or intuitions are never equivalent to territories mapped.[12]
People reason daily about what they ought to do and how they ought to do it. Inductively, they discover sequences of efficient means that achieve consequences. Once an end is reached—a problem solved—reasoning turns to new conditions of means-end relations. Valuations that ignore consequence-determining conditions cannot coordinate behavior to solve real problems; they contaminate rationality.
Value judgments have the form: if one acted in a particular way (or valued this object), then certain consequences would ensue, which would be valued. The difference between an apparent and a real good [means or end], between an unreflectively and a reflectively valued good, is captured by its value [valuation of goodness] not just as immediately experienced in isolation, but in view of its wider consequences and how they are valued.… So viewed, value judgments are tools for discovering how to live a better life, just as scientific hypotheses are tools for uncovering new information about the world.[10]
In brief, Dewey rejects the traditional belief that judging things asgood in themselves, apart from existingmeans-endrelations, can be rational. The sole rational criterion is instrumental value. Each valuation is conditional but, cumulatively, all are developmental—and therefore socially-legitimate solutions of problems. Competent instrumental valuations treat the "function of consequences as necessary tests of the validity of propositions,providedthese consequences are operationally instituted and are such as to resolve the specific problems evoking the operations."[13][14]: 29–31
John Fagg Foster madeJohn Dewey's rejection of intrinsic value more operational by showing that its competent use rejects the legitimacy ofutilitarianends—satisfaction of whatever ends individuals adopt. It requires recognizing developmental sequences of means and ends.[15][16][17]: 40–8
Utilitarians hold that individual wants cannot be rationally justified; they are intrinsically worthy subjective valuations and cannot be judged instrumentally. This belief supports philosophers who hold that facts ("what is") can serve as instrumental means for achieving ends, but cannot authorize ends ("what ought to be"). Thisfact-value distinctioncreates what philosophers label theis-ought problem: wants are intrinsically fact-free, good in themselves; whereas efficient tools are valuation-free, usable for good or bad ends.[17]: 60In modern North-American culture, this utilitarian belief supports thelibertarianassertion that every individual's intrinsic right to satisfy wants makes it illegitimate for anyone—but especially governments—to tell people what they ought to do.[18]
Foster finds that theis-ought problemis a useful place to attack the irrational separation of good means from good ends. He argues thatwant-satisfaction("what ought to be") cannot serve as an intrinsic moral compass because 'wants' are themselves consequences of transient conditions.
[T]he things people want are a function of their social experience, and that is carried on through structural institutions that specify their activities and attitudes. Thus the pattern of people's wants takes visible form partly as a result of the pattern of the institutional structure through which they participate in the economic process. As we have seen, to say that an economic problem exists is to say that part of the particular patterns of human relationships has ceased or failed to provide the effective participation of its members. In so saying, we are necessarily in the position of asserting that the instrumental efficiency of the economic process is the criterion of judgment in terms of which, and only in terms of which, we may resolve economic problems.[19]
Since 'wants' are shaped by social conditions, they must be judged instrumentally; they arise in problematic situations when habitual patterns of behavior fail to maintain instrumental correlations.[17]: 27
Foster uses homely examples to support his thesis that problematic situations ("what is") contain the means for judging legitimate ends ("what ought to be"). Rational efficient means achieve rational developmental ends. Consider the problem all infants face in learning to walk. They spontaneously recognize that walking is more efficient than crawling—an instrumental valuation of a desirable end. They learn to walk by repeatedly moving and balancing, judging the efficiency with which these means achieve their instrumental goal. When they master this new way-of-acting, they experience great satisfaction, but satisfaction is never their end-in-view.[20]
To guard against contamination of instrumental value by judging means and ends independently, Foster revised his definition to embrace both.
Instrumental value is the criterion of judgment which seeks instrumentally-efficient means that "work" to achieve developmentally-continuous ends. This definition stresses the condition that instrumental success is never short term; it must not lead down a dead-end street. The same point is made by the currently popular concern for sustainability—a synonym for instrumental value.[21]
Dewey's and Foster's argument that there is no intrinsic alternative to instrumental value continues to be ignored rather than refuted. Scholars continue to accept the possibility and necessity of knowing "what ought to be" independently of transient conditions that determine actual consequences of every action.Jacques EllulandAnjan Chakravarttywere prominent exponents of the truth and reality of intrinsic value as constraint on relativistic instrumental value.
Jacques Ellulmade scholarly contributions to many fields, but his American reputation grew out of his criticism of the autonomous authority of instrumental value, the criterion thatJohn Deweyand J. Fagg Foster found to be the core of human rationality. He specifically criticized the valuations central to Dewey's and Foster's thesis: evolving instrumental technology.
His principal work, published in 1954, bore the French title La technique and tackled the problem that Dewey addressed in 1929: a culture in which the authority of evolving technology destroys traditional valuations without creating legitimate new ones. Both men agree that conditionally-efficient valuations ("what is") become irrational when viewed as unconditionally efficient in themselves ("what ought to be"). However, while Dewey argues that contaminated instrumental valuations can be self-correcting, Ellul concludes that technology has become intrinsically destructive. The only escape from this evil is to restore authority to unconditional sacred valuations:[22]: 143
Nothing belongs any longer to the realm of the gods or the supernatural. The individual who lives in the technical milieu knows very well that there is nothing spiritual anywhere. But man cannot live without the [intrinsic] sacred. He therefore transfers his sense of the sacred to the very thing which has destroyed its former object: to technique itself.
The English edition ofLa techniquewas published in 1964, titledThe Technological Society, and quickly entered ongoing disputes in the United States over the responsibility of instrumental value for destructive social consequences. The translator[who?]ofTechnological Societysummarizes Ellul's thesis:[23]
Technological Societyis a description of the way in which an autonomous [instrumental] technology is in process of taking over the traditional values [intrinsic valuations] of every society without exception, subverting and suppressing those values to produce at last a monolithic world culture in which all non-technological difference and variety is mere appearance.
Ellul opensThe Technological Societyby asserting that instrumental efficiency is no longer a conditional criterion. It has become autonomous and absolute:[22]: xxxvi
The term technique, as I use it, does not mean machines, technology, or this or that procedure for attaining an end. In our technological society, technique is the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.
He blames instrumental valuations for destroying intrinsic meanings of human life: "Think of ourdehumanizedfactories, our unsatisfied senses, our working women, our estrangement from nature. Life in such an environment has no meaning."[22]: 4–5While Weber had labeled the discrediting of intrinsic valuations asdisenchantment, Ellul came to label it as "terrorism."[24]: 384, 19He dates its domination to the 1800s, when centuries-old handicraft techniques were massively eliminated by inhuman industry.
When, in the 19th century, society began to elaborate an exclusively rational technique which acknowledged only considerations of efficiency, it was felt that not only the traditions but the deepest instincts of humankind had been violated.[22]: 73Culture is necessarily humanistic or it does not exist at all.… [I]t answers questions about the meaning of life, the possibility of reunion with ultimate being, the attempt to overcome human finitude, and all other questions that they have to ask and handle. But technique cannot deal with such things.… Culture exists only if it raises the question of meaning and values [valuations].… Technique is not at all concerned about the meaning of life, and it rejects any relation to values [intrinsic valuations].[24]: 147–8
Ellul's core accusation is that instrumental efficiency has become absolute, i.e., agood-in-itself;[22]: 83it wraps societies in a new technologicalmilieuwith six intrinsically inhuman characteristics:[4]: 22
Philosophers Tiles and Oberdiek (1995) find Ellul's characterization of instrumental value inaccurate.[4]: 22–31 They criticize him for anthropomorphizing and demonizing instrumental value. They counter this by examining the moral reasoning of scientists whose work led to nuclear weapons: those scientists demonstrated the capacity of instrumental judgments to provide them with a moral compass to judge nuclear technology; they were morally responsible without intrinsic rules. Tiles and Oberdiek's conclusion coincides with that of Dewey and Foster: instrumental value, when competently applied, is self-correcting and provides humans with a developmental moral compass.
For although we have defended general principles of the moral responsibilities of professional people, it would be foolish and wrongheaded to suggest codified [intrinsic] rules. It would be foolish because concrete cases are more complex and nuanced than any code could capture; it would be wrongheaded because it would suggest that our sense of moral responsibility can be fully captured by a code.[4]: 193In fact, as we have seen in many instances, technology simply allows us to go on doing stupid things in clever ways. The questions that technology cannot solve, although it will always frame and condition the answers, are "What should we be trying to do? What kind of lives should we, as human beings, be seeking to live? And can this kind of life be pursued without exploiting others? But until we can at least propose [instrumental] answers to those questions we cannot really begin to do sensible things in the clever ways that technology might permit.[4]: 197
Anjan Chakravarttycame indirectly to question the autonomous authority of instrumental value. He viewed it as a foil for the currently dominant philosophical school labeled "scientific realism," with which he identifies. In 2007, he published a work defending the ultimate authority of intrinsic valuations to which realists are committed. He links the pragmatic instrumental criterion to discreditedanti-realistempiricistschools includinglogical positivismandinstrumentalism.
Chakravartty began his study with rough characterizations of realist and anti-realist valuations of theories. Anti-realists believe "that theories are merely instruments for predicting observable phenomena or systematizing observation reports;" they assert that theories can never report or prescribe truth or reality "in itself." By contrast, scientific realists believe that theories can "correctly describe both observable and unobservable parts of the world."[25]: xi, 10Well-confirmed theories—"what ought to be" as the end of reasoning—are more than tools; they are maps of intrinsic properties of an unobservable and unconditional territory—"what is" as natural-but-metaphysical real kinds.[25]: xiii, 33, 149
Chakravartty treats criteria of judgment as ungrounded opinion, but admits that realists apply the instrumental criterion to judge theories that "work."[25]: 25 He restricts the criterion's scope, claiming that every instrumental judgment is inductive, heuristic, accidental. Later experience might confirm a singular judgment only if it proves to have universal validity, meaning it possesses "detection properties" of natural kinds.[25]: 231 This inference is his fundamental ground for believing in intrinsic value.
He commits modern realists to threemetaphysicalvaluations or intrinsic kinds of knowledge of truth. Competent realists affirm that natural kinds exist in a mind-independent territory possessing 1) meaningful and 2) mappable intrinsic properties.
Ontologically, scientific realism is committed to the existence of a mind-independent world or reality. Arealist semanticsimplies that the theoretical claims [valuations] about this reality have truth values, and should be construed literally.… Finally, theepistemologicalcommitment is to the idea that these theoretical claims give us knowledge of the world. That is, predictively successful (mature, non-ad hoc) theories, taken literally as describing the nature of a mind-independent reality are (approximately) true.[25]: 9
He labels these intrinsic valuations assemi-realist, meaning they are currently the most accurate theoretical descriptions of mind-independent natural kinds. He finds these carefully qualified statements necessary to replace earlier realist claims of intrinsic reality discredited by advancing instrumental valuations.
Science has destroyed for many people the supernatural intrinsic value embraced by Weber and Ellul. But Chakravartty defended intrinsic valuations as necessary elements of all science—belief in unobservable continuities. He advances the thesis ofsemi-realism, according to which well-tested theories are good maps of natural kinds, as confirmed by their instrumental success; their predictive success means they conform to mind-independent, unconditional reality.
Causal properties are the fulcrum of semirealism. Their [intrinsic] relations compose the concrete structures that are the primary subject matters of a tenable scientific realism. They regularly cohere to form interesting units, and these groupings make up the particulars investigated by the sciences and described by scientific theories.[25]: 119Scientific theories describe [intrinsic] causal properties, concrete structures, and particulars such as objects, events, and processes. Semirealism maintains that under certain conditions it is reasonable for realists to believe that the best of these descriptions tell us not merely about things that can be experienced with the unaided senses, but also about some of the unobservable things underlying them.[25]: 151
Chakravartty argues that these semirealist valuations legitimize scientific theorizing about pragmatic kinds. The fact that theoretical kinds are frequently replaced does not mean that mind-independent reality is changing, but simply that theoretical maps are approximating intrinsic reality.
The primary motivation for thinking that there are such things as natural kinds is the idea that carving nature according to its own divisions yields groups of objects that are capable of supporting successful inductive generalizations and prediction. So the story goes, one's recognition of natural categories facilitates these practices, and thus furnishes an excellent explanation for their success.[25]: 151The moral here is that however realists choose to construct particulars out of instances of properties, they do so on the basis of a belief in the [mind-independent] existence of those properties. That is the bedrock of realism. Property instances lend themselves to different forms of packaging [instrumental valuations], but as a feature of scientific description, this does not compromise realism with respect to the relevant [intrinsic] packages.[25]: 81
In sum, Chakravartty argues that contingent instrumental valuations are warranted only as they approximate unchanging intrinsic valuations. Scholars continue to perfect their explanations of intrinsic value, as they deny the developmental continuity of applications of instrumental value.
Abstraction is a process in which only some of the potentially many relevant factors present in [unobservable] reality are represented in a model or description with some aspect of the world, such as the nature or behavior of a specific object or process. ... Pragmatic constraints such as these play a role in shaping how scientific investigations are conducted, and together which and how many potentially relevant factors [intrinsic kinds] are incorporated into models and descriptions during the process of abstraction. The role of pragmatic constraints, however, does not undermine the idea that putative representations of factors composing abstract models can be thought to have counterparts in the [mind-independent] world.[25]: 191
Realist intrinsic value, as proposed by Chakravartty, is widely endorsed in modern scientific circles, while the supernatural intrinsic value endorsed by Max Weber and Jacques Ellul maintains its popularity throughout the world. Doubters about the reality of instrumental and intrinsic value are few.
|
https://en.wikipedia.org/wiki/Instrumental_value
|
Layer or layered may refer to:
|
https://en.wikipedia.org/wiki/Layer_(disambiguation)
|
Multilevel models[a] are statistical models of parameters that vary at more than one level.[1] An example could be a model of student performance that contains measures for individual students as well as measures for classrooms within which the students are grouped. These models can be seen as generalizations of linear models (in particular, linear regression), although they can also extend to non-linear models. These models became much more popular after sufficient computing power and software became available.[1]
Multilevel models are particularly appropriate for research designs where data for participants are organized at more than one level (i.e., nested data).[2] The units of analysis are usually individuals (at a lower level) who are nested within contextual/aggregate units (at a higher level).[3] While the lowest level of data in multilevel models is usually an individual, repeated measurements of individuals may also be examined.[2][4] As such, multilevel models provide an alternative type of analysis for univariate or multivariate analysis of repeated measures. Individual differences in growth curves may be examined.[2] Furthermore, multilevel models can be used as an alternative to ANCOVA, where scores on the dependent variable are adjusted for covariates (e.g. individual differences) before testing treatment differences.[5] Multilevel models are able to analyze these experiments without the assumption of homogeneity of regression slopes that is required by ANCOVA.[2]
Multilevel models can be used on data with many levels, although 2-level models are the most common and the rest of this article deals only with these. The dependent variable must be examined at the lowest level of analysis.[1]
When there is a single level 1 independent variable, the level 1 model is
Yij=β0j+β1jXij+eij{\displaystyle Y_{ij}=\beta _{0j}+\beta _{1j}X_{ij}+e_{ij}}.
eij∼N(0,σ12){\displaystyle e_{ij}\sim {\mathcal {N}}(0,\sigma _{1}^{2})}
At Level 1, both the intercepts and slopes in the groups can be either fixed (meaning that all groups have the same values, although in the real world this would be a rare occurrence), non-randomly varying (meaning that the intercepts and/or slopes are predictable from an independent variable at Level 2), or randomly varying (meaning that the intercepts and/or slopes are different in the different groups, and that each have their own overall mean and variance).[2][4]
When there are multiple level 1 independent variables, the model can be expanded by substituting vectors and matrices in the equation.
When the relationship between the response Yij{\displaystyle Y_{ij}} and predictor Xij{\displaystyle X_{ij}} cannot be described by a linear relationship, then one can find some nonlinear functional relationship between the response and predictor, and extend the model to a nonlinear mixed-effects model. For example, when the response Yij{\displaystyle Y_{ij}} is the cumulative infection trajectory of the i{\displaystyle i}-th country, and Xij{\displaystyle X_{ij}} represents the j{\displaystyle j}-th time point, then the ordered pair (Xij,Yij){\displaystyle (X_{ij},Y_{ij})} for each country may show a shape similar to a logistic function.[6][7]
The dependent variables are the intercepts and the slopes for the independent variables at Level 1 in the groups of Level 2.
u0j∼N(0,σ22){\displaystyle u_{0j}\sim {\mathcal {N}}(0,\sigma _{2}^{2})}
u1j∼N(0,σ32){\displaystyle u_{1j}\sim {\mathcal {N}}(0,\sigma _{3}^{2})}
β0j=γ00+γ01wj+u0j{\displaystyle \beta _{0j}=\gamma _{00}+\gamma _{01}w_{j}+u_{0j}}
β1j=γ10+γ11wj+u1j{\displaystyle \beta _{1j}=\gamma _{10}+\gamma _{11}w_{j}+u_{1j}}
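A minimal simulation of this two-level model, generating group-specific intercepts and slopes from the level-2 equations and then level-1 responses (all numeric values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
J, n_per_group = 30, 25                    # number of groups, observations per group

# Level-2 model: beta_0j = g00 + g01*w_j + u_0j, beta_1j = g10 + g11*w_j + u_1j
g00, g01, g10, g11 = 2.0, 0.5, 1.0, -0.3   # illustrative gamma coefficients
sigma_u0, sigma_u1, sigma_e = 0.8, 0.4, 1.0

w = rng.normal(size=J)                     # level-2 predictor w_j
beta0 = g00 + g01 * w + rng.normal(0, sigma_u0, size=J)
beta1 = g10 + g11 * w + rng.normal(0, sigma_u1, size=J)

# Level-1 model: Y_ij = beta_0j + beta_1j * X_ij + e_ij
group = np.repeat(np.arange(J), n_per_group)
x = rng.normal(size=J * n_per_group)
y = beta0[group] + beta1[group] * x + rng.normal(0, sigma_e, size=J * n_per_group)

print(y[:5])                               # simulated responses, ready for model fitting
```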
Before conducting a multilevel model analysis, a researcher must decide on several aspects. First, the researcher must decide which predictors, if any, are to be included in the analysis. Second, the researcher must decide whether parameter values (i.e., the elements that will be estimated) will be fixed or random.[2][5][4] Fixed parameters are composed of a constant over all the groups, whereas a random parameter has a different value for each of the groups.[4] Additionally, the researcher must decide whether to employ maximum likelihood estimation or restricted maximum likelihood estimation.[2]
A random intercepts model is a model in which intercepts are allowed to vary, and therefore, the scores on the dependent variable for each individual observation are predicted by the intercept that varies across groups.[5][8][4]This model assumes that slopes are fixed (the same across different contexts). In addition, this model provides information aboutintraclass correlations, which are helpful in determining whether multilevel models are required in the first place.[2]
A random slopes model is a model in which slopes are allowed to vary according to a correlation matrix, and therefore, the slopes are different across a grouping variable such as time or individuals. This model assumes that intercepts are fixed (the same across different contexts).[5]
A model that includes both random intercepts and random slopes is likely the most realistic type of model, although it is also the most complex. In this model, both intercepts and slopes are allowed to vary across groups, meaning that they are different in different contexts.[5]
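Using the data simulated in the sketch above, random-intercept and random-intercept-and-slope models can be fitted with, for example, statsmodels' MixedLM (a sketch; the call signatures assume statsmodels' formula interface):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({"y": y, "x": x, "w": w[group], "group": group})

# Random intercepts only: the intercept varies by group, the slope of x is fixed
m_int = smf.mixedlm("y ~ x * w", df, groups=df["group"]).fit(reml=False)

# Random intercepts and random slopes for x
m_slope = smf.mixedlm("y ~ x * w", df, groups=df["group"],
                      re_formula="~x").fit(reml=False)

print(m_slope.summary())
```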
In order to conduct a multilevel model analysis, one would start with fixed coefficients (slopes and intercepts). One aspect would be allowed to vary at a time (that is, would be changed), and compared with the previous model in order to assess better model fit.[1]There are three different questions that a researcher would ask in assessing a model. First, is it a good model? Second, is a more complex model better? Third, what contribution do individual predictors make to the model?
In order to assess models, different model fit statistics would be examined.[2]One such statistic is the chi-squarelikelihood-ratio test, which assesses the difference between models. The likelihood-ratio test can be employed for model building in general, for examining what happens when effects in a model are allowed to vary, and when testing a dummy-coded categorical variable as a single effect.[2]However, the test can only be used when models arenested(meaning that a more complex model includes all of the effects of a simpler model). When testing non-nested models, comparisons between models can be made using theAkaike information criterion(AIC) or theBayesian information criterion(BIC), among others.[1][2][5]See furtherModel selection.
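Continuing the sketch, the two nested models fitted by maximum likelihood above can be compared with a likelihood-ratio test, and information criteria can be computed from the log-likelihoods (the chi-squared reference distribution is only approximate when variance components sit on the boundary):

```python
from scipy import stats

# Likelihood-ratio test: the random-intercept model is nested in the random-slope model
lr_stat = 2 * (m_slope.llf - m_int.llf)
extra_params = 2                 # added slope variance and intercept-slope covariance
p_value = stats.chi2.sf(lr_stat, extra_params)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")

# AIC computed by hand from the ML log-likelihood, for (possibly non-nested) comparison
def aic(result, n_params):
    return -2 * result.llf + 2 * n_params

print("AIC, random intercepts:         ", aic(m_int, 6))    # 4 fixed + 2 (co)variance params
print("AIC, random intercepts + slopes:", aic(m_slope, 8))   # 4 fixed + 4 (co)variance params
```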
Multilevel models have the same assumptions as other major general linear models (e.g.,ANOVA,regression), but some of the assumptions are modified for the hierarchical nature of the design (i.e., nested data).
The assumption of linearity states that there is a rectilinear (straight-line, as opposed to non-linear or U-shaped) relationship between variables.[9]However, the model can be extended to nonlinear relationships.[10]Particularly, when the mean part of the level 1 regression equation is replaced with a non-linear parametric function, then such a model framework is widely called thenonlinear mixed-effects model.[7]
The assumption of normality states that the error terms at every level of the model are normally distributed.[9] However, most statistical software allows one to specify different distributions for the variance terms, such as Poisson, binomial, or logistic. The multilevel modelling approach can be used for all forms of generalized linear models.
The assumption ofhomoscedasticity, also known as homogeneity of variance, assumes equality of population variances.[9]However, different variance-correlation matrix can be specified to account for this, and the heterogeneity of variance can itself be modeled.
Independence is an assumption of general linear models, which states that cases are random samples from the population and that scores on the dependent variable are independent of each other.[9]One of the main purposes of multilevel models is to deal with cases where the assumption of independence is violated; multilevel models do, however, assume that 1) the level 1 and level 2 residuals are uncorrelated and 2) The errors (as measured by the residuals) at the highest level are uncorrelated.[11]
The regressors must not correlate with the random effects,u0j{\displaystyle u_{0j}}. This assumption is testable but often ignored, rendering the estimator inconsistent.[12]If this assumption is violated, the random-effect must be modeled explicitly in the fixed part of the model, either by using dummy variables or including cluster means of allXij{\displaystyle X_{ij}}regressors.[12][13][14][15]This assumption is probably the most important assumption the estimator makes, but one that is misunderstood by most applied researchers using these types of models.[12]
The type of statistical test that is employed in multilevel models depends on whether one is examining fixed effects or variance components. When examining fixed effects, the tests are compared with the standard error of the fixed effect, which results in a Z-test.[5] A t-test can also be computed. When computing a t-test, it is important to keep in mind the degrees of freedom, which will depend on the level of the predictor (e.g., level 1 predictor or level 2 predictor).[5] For a level 1 predictor, the degrees of freedom are based on the number of level 1 predictors, the number of groups and the number of individual observations. For a level 2 predictor, the degrees of freedom are based on the number of level 2 predictors and the number of groups.[5]
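For the fixed effects, the Wald-type test described here amounts to dividing each estimate by its standard error; a sketch using the fitted model from the earlier example (attribute names assume statsmodels' MixedLM results API):

```python
import numpy as np
from scipy import stats

# Wald z-test for each fixed effect: estimate divided by its standard error
z = m_slope.fe_params / m_slope.bse_fe
p = 2 * stats.norm.sf(np.abs(z))
for name, zi, pi in zip(m_slope.fe_params.index, z, p):
    print(f"{name:12s} z = {zi:6.2f}   p = {pi:.4f}")
```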
Statistical power for multilevel models differs depending on whether it is level 1 or level 2 effects that are being examined. Power for level 1 effects is dependent upon the number of individual observations, whereas the power for level 2 effects is dependent upon the number of groups.[16]To conduct research with sufficient power, large sample sizes are required in multilevel models. However, the number of individual observations in groups is not as important as the number of groups in a study. In order to detect cross-level interactions, given that the group sizes are not too small, recommendations have been made that at least 20 groups are needed,[16]although many fewer can be used if one is only interested in inference on the fixed effects and the random effects are control, or "nuisance", variables.[4]The issue of statistical power in multilevel models is complicated by the fact that power varies as a function of effect size and intraclass correlations, it differs for fixed effects versus random effects, and it changes depending on the number of groups and the number of individual observations per group.[16]
The concept of level is the keystone of this approach. In aneducational researchexample, the levels for a 2-level model might be
However, if one were studying multiple schools and multiple school districts, a 4-level model could include
The researcher must establish for eachvariablethe level at which it was measured. In this example "test score" might be measured at pupil level, "teacher experience" at class level, "school funding" at school level, and "urban" at district level.
As a simple example, consider a basic linear regression model that predicts income as a function of age, class, gender and race. It might then be observed that income levels also vary depending on the city and state of residence. A simple way to incorporate this into the regression model would be to add an additionalindependentcategorical variableto account for the location (i.e. a set of additional binary predictors and associated regression coefficients, one per location). This would have the effect of shifting the mean income up or down—but it would still assume, for example, that the effect of race and gender on income is the same everywhere. In reality, this is unlikely to be the case—different local laws, different retirement policies, differences in level of racial prejudice, etc. are likely to cause all of the predictors to have different sorts of effects in different locales.
In other words, a simple linear regression model might, for example, predict that a given randomly sampled person inSeattlewould have an average yearly income $10,000 higher than a similar person inMobile, Alabama. However, it would also predict, for example, that a white person might have an average income $7,000 above a black person, and a 65-year-old might have an income $3,000 below a 45-year-old, in both cases regardless of location. A multilevel model, however, would allow for different regression coefficients for each predictor in each location. Essentially, it would assume that people in a given location have correlated incomes generated by a single set of regression coefficients, whereas people in another location have incomes generated by a different set of coefficients. Meanwhile, the coefficients themselves are assumed to be correlated and generated from a single set ofhyperparameters. Additional levels are possible: For example, people might be grouped by cities, and the city-level regression coefficients grouped by state, and the state-level coefficients generated from a single hyper-hyperparameter.
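A toy sketch of that idea: each city draws its own regression coefficients from shared hyperparameters, so predictor effects differ by location while still being tied together at the higher level (all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
cities = ["Seattle", "Mobile", "Denver", "Austin"]

# Hyperparameters: average effect of each predictor and how much it varies by city
mean_coefs = {"intercept": 40_000, "age": 300, "gender": -4_000, "race": -7_000}
sd_coefs   = {"intercept": 8_000,  "age": 100, "gender": 2_000,  "race": 3_000}

# Each city draws its own set of regression coefficients from the hyperparameters
city_coefs = {
    c: {k: rng.normal(mean_coefs[k], sd_coefs[k]) for k in mean_coefs} for c in cities
}
for c, coefs in city_coefs.items():
    print(c, {k: round(v) for k, v in coefs.items()})
# A pooled regression forces one shared coefficient set; a multilevel model estimates
# both the city-level coefficients and the hyperparameters that tie them together.
```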
Multilevel models are a subclass ofhierarchical Bayesian models, which are general models with multiple levels ofrandom variablesand arbitrary relationships among the different variables. Multilevel analysis has been extended to include multilevelstructural equation modeling, multilevellatent class modeling, and other more general models.
Multilevel models have been used in education research or geographical research, to estimate separately the variance between pupils within the same school, and the variance between schools. In psychological applications, the multiple levels are items in an instrument, individuals, and families. In sociological applications, multilevel models are used to examine individuals embedded within regions or countries. Inorganizational psychologyresearch, data from individuals must often be nested within teams or other functional units. They are often used in ecological research as well under the more general termmixed models.[4]
Different covariables may be relevant on different levels. They can be used for longitudinal studies, as with growth studies, to separate changes within one individual and differences between individuals.
Cross-level interactions may also be of substantive interest; for example, when a slope is allowed to vary randomly, a level-2 predictor may be included in the slope formula for the level-1 covariate. For example, one may estimate the interaction of race and neighborhood to obtain an estimate of the interaction between an individual's characteristics and the social context.
There are several alternative ways of analyzing hierarchical data, although most of them have some problems. First, traditional statistical techniques can be used. One could disaggregate higher-order variables to the individual level, and thus conduct an analysis on this individual level (for example, assign class variables to the individual level). The problem with this approach is that it would violate the assumption of independence, and thus could bias our results. This is known as atomistic fallacy.[17]Another way to analyze the data using traditional statistical approaches is to aggregate individual level variables to higher-order variables and then to conduct an analysis on this higher level. The problem with this approach is that it discards all within-group information (because it takes the average of the individual level variables). As much as 80–90% of the variance could be wasted, and the relationship between aggregated variables is inflated, and thus distorted.[18]This is known asecological fallacy, and statistically, this type of analysis results in decreased power in addition to the loss of information.[2]
Another way to analyze hierarchical data would be through a random-coefficients model. This model assumes that each group has a different regression model—with its own intercept and slope.[5] Because groups are sampled, the model assumes that the intercepts and slopes are also randomly sampled from a population of group intercepts and slopes. This allows for an analysis in which one can assume that slopes are fixed but intercepts are allowed to vary.[5] However, this presents a problem: individual components are independent, while group components are independent between groups but dependent within groups. This also allows for an analysis in which the slopes are random; however, the correlations of the error terms (disturbances) are dependent on the values of the individual-level variables.[5] Thus, the problem with using a random-coefficients model in order to analyze hierarchical data is that it is still not possible to incorporate higher-order variables.
Multilevel models have two error terms, which are also known as disturbances. The individual components are all independent, but there are also group components, which are independent between groups but correlated within groups. However, variance components can differ, as some groups are more homogeneous than others.[18]
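A minimal simulation sketch of this error structure is shown below; all parameter values and names are hypothetical, chosen only for illustration. Each group draws its own intercept and slope disturbances, which every observation in that group shares, while a separate individual-level disturbance is drawn per observation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per_group = 20, 40

gamma_00, gamma_10 = 1.0, 0.5                            # fixed intercept and slope
u0 = rng.normal(0.0, 0.8, size=n_groups)                 # group-level intercept disturbances
u1 = rng.normal(0.0, 0.3, size=n_groups)                 # group-level slope disturbances
e = rng.normal(0.0, 1.0, size=(n_groups, n_per_group))   # individual-level disturbances

x = rng.normal(size=(n_groups, n_per_group))

# y_ij = (gamma_00 + u0_j) + (gamma_10 + u1_j) * x_ij + e_ij
y = (gamma_00 + u0[:, None]) + (gamma_10 + u1[:, None]) * x + e

# Observations in the same group share u0_j and u1_j, so their combined disturbances
# are correlated within a group but independent across groups.
```

Fitting such a model to real data would normally be done with a dedicated mixed-model routine rather than by hand; the simulation only makes the two disturbance terms explicit.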
Multilevel modeling is frequently used in diverse applications, and it can be formulated within the Bayesian framework. In particular, Bayesian nonlinear mixed-effects models have recently received significant attention. A basic version of the Bayesian nonlinear mixed-effects model can be represented by the following three stages:
Stage 1: Individual-Level Model
{\displaystyle y_{ij}=f(t_{ij};\theta _{1i},\theta _{2i},\ldots ,\theta _{li},\ldots ,\theta _{Ki})+\epsilon _{ij},\qquad \epsilon _{ij}\sim N(0,\sigma ^{2}),\qquad i=1,\ldots ,N,\;j=1,\ldots ,M_{i}.}
Stage 2: Population Model
{\displaystyle \theta _{li}=\alpha _{l}+\sum _{b=1}^{P}\beta _{lb}x_{ib}+\eta _{li},\qquad \eta _{li}\sim N(0,\omega _{l}^{2}),\qquad i=1,\ldots ,N,\;l=1,\ldots ,K.}
Stage 3: Prior
{\displaystyle \sigma ^{2}\sim \pi (\sigma ^{2}),\qquad \alpha _{l}\sim \pi (\alpha _{l}),\qquad (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP})\sim \pi (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP}),\qquad \omega _{l}^{2}\sim \pi (\omega _{l}^{2}),\qquad l=1,\ldots ,K.}
Here,yij{\displaystyle y_{ij}}denotes the continuous response of thei{\displaystyle i}-th subject at the time pointtij{\displaystyle t_{ij}}, andxib{\displaystyle x_{ib}}is theb{\displaystyle b}-th covariate of thei{\displaystyle i}-th subject. Parameters involved in the model are written in Greek letters.f(t;θ1,…,θK){\displaystyle f(t;\theta _{1},\ldots ,\theta _{K})}is a known function parameterized by theK{\displaystyle K}-dimensional vector(θ1,…,θK){\displaystyle (\theta _{1},\ldots ,\theta _{K})}. Typically,f{\displaystyle f}is a `nonlinear' function and describes the temporal trajectory of individuals. In the model,ϵij{\displaystyle \epsilon _{ij}}andηli{\displaystyle \eta _{li}}describe within-individual variability and between-individual variability, respectively. IfStage 3: Prioris not considered, then the model reduces to a frequentist nonlinear mixed-effect model.
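As a concrete, purely illustrative instance of the three stages above, the following sketch simulates data from the model. It assumes, hypothetically, a logistic trajectory for f, K = 2 individual-level parameters, P = 1 covariate, and normal and inverse-gamma priors; none of these specific choices come from the source.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, P, K = 25, 10, 1, 2          # subjects, time points per subject, covariates, parameters

def f(t, theta1, theta2):
    # assumed nonlinear trajectory: logistic curve with asymptote theta1 and rate theta2
    return theta1 / (1.0 + np.exp(-theta2 * (t - 5.0)))

# Stage 3: draw population-level parameters from (hypothetical) priors
sigma2 = 1.0 / rng.gamma(shape=2.0, scale=1.0)            # residual variance sigma^2
alpha = rng.normal([10.0, 1.0], 1.0)                      # alpha_l, l = 1..K
beta = rng.normal(0.0, 0.5, size=(K, P))                  # beta_lb
omega2 = 1.0 / rng.gamma(shape=2.0, scale=1.0, size=K)    # omega_l^2

# Stage 2: individual-level parameters theta_li
x = rng.normal(size=(N, P))                               # covariates x_ib
eta = rng.normal(0.0, np.sqrt(omega2), size=(N, K))
theta = alpha + x @ beta.T + eta                          # shape (N, K)

# Stage 1: observations y_ij at times t_ij
t = np.tile(np.linspace(0.0, 10.0, M), (N, 1))
eps = rng.normal(0.0, np.sqrt(sigma2), size=(N, M))
y = f(t, theta[:, [0]], theta[:, [1]]) + eps
```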
A central task in the application of the Bayesian nonlinear mixed-effect models is to evaluate the posterior density:
{\displaystyle \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K}\mid \{y_{ij}\}_{i=1,j=1}^{N,M_{i}})}
{\displaystyle \propto \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}},\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}
{\displaystyle {\begin{aligned}=&\ \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}}\mid \{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2})&&{\text{(Stage 1: individual-level model)}}\\\times &\ \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K}\mid \{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})&&{\text{(Stage 2: population model)}}\\\times &\ \pi (\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})&&{\text{(Stage 3: prior)}}\end{aligned}}}
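The factorization above translates directly into an unnormalized log-posterior. The sketch below is illustrative only: it reuses the hypothetical logistic f and the simulated arrays (y, t, x, theta, and the parameter shapes) from the previous sketch, and assumes normal and inverse-gamma priors that are not specified in the source.

```python
import numpy as np
from scipy import stats

def f(t, theta1, theta2):
    # same hypothetical logistic trajectory as in the simulation sketch
    return theta1 / (1.0 + np.exp(-theta2 * (t - 5.0)))

def log_posterior(theta, sigma2, alpha, beta, omega2, y, t, x):
    """Unnormalized log-posterior following the three-stage factorization above."""
    # Stage 1: individual-level likelihood  pi(y | theta, sigma^2)
    mu = f(t, theta[:, [0]], theta[:, [1]])
    lp = stats.norm.logpdf(y, loc=mu, scale=np.sqrt(sigma2)).sum()
    # Stage 2: population model  pi(theta | alpha, beta, omega^2)
    lp += stats.norm.logpdf(theta, loc=alpha + x @ beta.T, scale=np.sqrt(omega2)).sum()
    # Stage 3: assumed priors on the population-level parameters
    lp += stats.invgamma.logpdf(sigma2, a=2.0, scale=1.0)
    lp += stats.norm.logpdf(alpha, loc=0.0, scale=10.0).sum()
    lp += stats.norm.logpdf(beta, loc=0.0, scale=10.0).sum()
    lp += stats.invgamma.logpdf(omega2, a=2.0, scale=1.0).sum()
    return lp
```

Such a function could be handed to a generic MCMC sampler to draw from the posterior; in practice, dedicated probabilistic-programming tools are usually preferred.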
A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow.[19]The standard research cycle involves a literature review, defining a problem, and specifying the research question and hypothesis. The Bayesian-specific workflow comprises three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear functionf{\displaystyle f}; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
|
https://en.wikipedia.org/wiki/Multilevel_model
|
Incombinatoricsandorder theory, amultitreemay describe either of two equivalent structures: adirected acyclic graph(DAG) in which there is at most one directed path between any twovertices, or equivalently in which thesubgraphreachable from any vertex induces anundirected tree, or apartially ordered set(poset) that does not have four itemsa,b,c, anddforming a diamond suborder witha≤b≤danda≤c≤dbut withbandcincomparable to each other (also called adiamond-free poset[1]).
Incomputational complexity theory, multitrees have also been calledstrongly unambiguous graphsormangroves; they can be used to modelnondeterministic algorithmsin which there is at most one computational path connecting any two states.[2]
Multitrees may be used to represent multiple overlappingtaxonomiesover the same ground set.[3]If afamily treemay contain multiple marriages from one family to another, but does not contain marriages between any two blood relatives, then it forms a multitree.[4]
In a directed acyclic graph, if there is at most one directed path between any two vertices, or equivalently if the subgraph reachable from any vertex induces an undirected tree, then itsreachabilityrelation is a diamond-free partial order. Conversely, in a diamond-free partial order, thetransitive reductionidentifies a directed acyclic graph in which the subgraph reachable from any vertex induces an undirected tree.
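The "at most one directed path between any two vertices" characterization can be checked directly on a small DAG. The sketch below is illustrative only (the function name and example graphs are made up); it counts directed paths between every ordered pair of vertices by memoized recursion, which terminates because the input graph is assumed acyclic.

```python
from functools import lru_cache

def is_multitree(adj):
    """adj: adjacency list {vertex: [successors]} of a DAG."""
    vertices = list(adj)

    @lru_cache(maxsize=None)
    def path_count(u, v):
        # number of distinct directed paths from u to v (finite since the graph is acyclic)
        if u == v:
            return 1
        return sum(path_count(w, v) for w in adj[u])

    return all(path_count(u, v) <= 1 for u in vertices for v in vertices)

# The diamond a -> b -> d, a -> c -> d has two directed a-to-d paths, so it is not a multitree:
diamond = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
polytree = {"a": ["b", "c"], "b": [], "c": [], "d": ["c"]}
print(is_multitree(diamond))   # False
print(is_multitree(polytree))  # True
```

The diamond graph fails the test because it contains two distinct directed paths from a to d, which is exactly the forbidden suborder described above.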
A diamond-freefamily of setsis a familyFof sets whose inclusion ordering forms a diamond-free poset. IfD(n) denotes the maximum possible size of a diamond-free family of subsets of ann-element set, then it is known that
and it is conjectured that the limit is 2.[1]
Apolytree, a directed acyclic graph formed byorientingthe edges of an undirected tree, is a special case of a multitree.
The subgraph reachable from any vertex in a multitree is anarborescencerooted in the vertex, that is, a polytree in which all edges are oriented away from the root.
The word "multitree" has also been used to refer to aseries–parallel partial order,[5]or to other structures formed by combining multiple trees.
|
https://en.wikipedia.org/wiki/Multitree
|
Anordinary(fromLatinordinarius) is an officer of a church or civic authority who by reason of office hasordinary powerto execute laws.
Such officers are found in hierarchically organised churches ofWestern Christianitywhich have anecclesiastical legal system.[1]For example, diocesan bishops are ordinaries in theCatholic Church[1]and theChurch of England.[2]InEastern Christianity, a corresponding officer is called ahierarch[3](fromGreekἱεράρχηςhierarkhēs"president of sacred rites, high-priest"[4]which comes in turn from τὰ ἱεράta hiera, "the sacred rites" and ἄρχωarkhō, "I rule").[5]
Incanon law, the power to govern the church is divided into the power to make laws (legislative), enforce the laws (executive), and to judge based on the law (judicial).[6]An official exercises power to govern either because he holds an office to which the law grants governing power or because someone with governing power has delegated it to him. Ordinary power is the former, while the latter is delegated power.[7]The office with ordinary power could possess the governing power itself (proper ordinary power) or instead it could have the ordinary power of agency, the inherent power to exercise someone else's power (vicariousordinary power).[8]
The law vesting ordinary power could either be ecclesiastical law, i.e. the positive enactments that the church has established for itself, or divine law, i.e. the laws which were given to the Church by God.[9]As an example of divinely instituted ordinaries, whenJesusestablished the Church, he also established theepiscopateand theprimacy of Peter, endowing the offices with power to govern the Church.[10]Thus, in the Catholic Church, the office of successor of Simon Peter and the office of diocesan bishop possess their ordinary power even in the absence of positive enactments from the Church.
Many officers possess ordinary power but, due to their lack of ordinary executive power, are not called ordinaries. The best example of this phenomenon is the office ofjudicial vicar, a.k.a.officialis. The judicial vicar only has authority through his office to exercise the diocesan bishop's power to judge cases.[11]Though the vicar has vicarious ordinary judicial power, he is not an ordinary because he lacks ordinary executive power. Avicar general, however, has authority through his office to exercise the diocesan bishop's executive power.[12]He is therefore an ordinary because of this vicarious ordinary executive power.
Local ordinaries exercise ordinary power and are ordinaries inparticular churches.[13]The followingclericsare local ordinaries:
Also classified as local ordinaries, although they do not head a particular church or equivalent community, are:
Major superiors ofreligious institutes(includingabbots) and ofsocieties of apostolic lifeare ordinaries of their respective memberships, but not local ordinaries.[20]
In theEastern Orthodox Church, a hierarch (ruling bishop) holds uncontested authority within the boundaries of his own diocese; no other bishop may perform anysacerdotalfunctions without the ruling bishop's express invitation. The violation of this rule is calledeispēdēsis(Greek: εἰσπήδησις, "trespassing", literally "jumping in"), and is uncanonical. Ultimately, all bishops in the Church are equal, regardless of any title they may enjoy (Patriarch,Metropolitan,Archbishop, etc.). The role of the bishop in the Orthodox Church is both hierarchical and sacramental.[21]
This pattern of governance dates back to the earliest centuries of Christianity, as witnessed by the writings ofIgnatius of Antioch(c.100 AD):
The bishop in each Church presides in the place of God.... Let no one do any of the things which concern the Church without the bishop.... Wherever the bishop appears, there let the people be, just as wherever Jesus Christ is, there is theCatholic Church.
And it is the bishop's primary and distinctive task to celebrate theEucharist, "the medicine of immortality."[21][22]SaintCyprian of Carthage(258 AD) wrote:
The episcopate is a single whole, in which each bishop enjoys full possession. So is the Church a single whole, though it spreads far and wide into a multitude of churches and its fertility increases.[23]
Bishop Kallistos (Ware)wrote:
There are many churches, but only One Church; manyepiscopibut only one episcopate.[24]
InEastern Orthodox Christianity, the church is not seen as a monolithic, centralized institution, but rather as existing in its fullness in each local body. The church is defined Eucharistically:
in each particular community gathered around its bishop; and at every local celebration of the Eucharist it is thewholeChrist who is present, not just a part of Him. Therefore, each local community, as it celebrates the Eucharist ... is the church in its fullness.[21]
An Eastern Orthodox bishop's authority comes from his election andconsecration. He is, however, subject to theSacred Canonsof the Eastern Orthodox Church, and answers to theSynod of Bishopsto which he belongs. In case an Orthodox bishop is overruled by his local synod, he retains the right ofappeal(Greek: Ἔκκλητον,Ékklēton) to his ecclesiastical superior (e.g. a Patriarch) and his synod.
|
https://en.wikipedia.org/wiki/Ordinary_(officer)
|
Major recurring characters of theHalomultimedia franchise are organized below by their respective affiliations within the series' fictional universe. The franchise's central story revolves around conflict between humanity under the auspices of theUnited Nations Space Commandor UNSC, and an alien alliance known as theCovenant. The artifacts left behind by an ancient race known as theForerunnerplay a central role—particularly theringworldsknown asHalos, built to contain the threat of the parasiticFlood.
The characters underwent major changes over the course of the firstHalogame's development, and were continually refined or changed with the advance of graphics and animation technologies.Halo's commercial and critical success has led to large amounts of merchandise featuring the franchise's characters to be produced. The Master Chief, the most visible symbol of the series, has been heavily marketed, with the character's visage appearing on soda bottles, T-shirts, andXboxcontrollers. Other merchandise produced includes several sets ofaction figures. The franchise's characters have received varying reception, with some praised as among the best in gaming, while others have been called cliched or boring.
TheHalofranchise originated with the 2001 video gameHalo: Combat Evolved. The game's characters were continually refined through development, as developerBungiewas bought byMicrosoftand the platform shifted from theMacintoshto theXbox. Other Bungie developers would often add input to character development, even if they were not working on the game itself.[1]: 19An outside artist, Shi Kai Wang, developed the early concept sketches of what would eventually become the Master Chief. However upon developing a 3D model, the artists decided the Chief looked too slender, almost effeminate, and subsequently bulked up the character.[1]: 20Early Covenant Elites had a more natural jaw rather than the split mandibles they would later sport; at one point, Jason Jones was also insistent about having a tail on the Elites, but this idea was eventually dropped.[1]: 38
Originally, the game designers decided to hand-key character animations.[1]: 14The animators videotaped themselves to have reference footage for the movement of game characters; art director Marcus Lehto's wife recorded him "running around a field with a two-by-four" for the human marines. ByHalo 3, Bungie staff had a special room designed for capturing reference material.[2]Many of the subsequent human characters' features were based on Bungie designers,[1]: 27while character animators looked to simian, ursine, insectoid, and reptilian features for the various races of the Covenant.[1]: 53The artificial intelligence of the characters was also deliberately limited so that they would react realistically to environmental changes and situations.[3]Later games usemotion captureto record the movement and facial performances of the cast.
TheHaloseries features voice work by television and film actors includingRon Perlman,Orlando Jones,Michelle Rodriguez,Robert Davi, andTerence Stamp.[4]Voice acting became more important asHalo: Combat Evolved's sequels were developed;Halo 2had 2,000 lines of combat dialogue, whileHalo 3has in excess of 14,000 lines.[5]Some actors voiced their lines in remote locations, while others traveled to a studio to record their lines.[6]In interviews,Halo's voice actors stated that they had no idea that the games would become such a critical and commercial success.Steve Downes, the voice of the game's protagonist, stated that generally when a voice actor has finished their lines, their involvement with the game ends. As the characters inCombat Evolvedwere relatively undefined, the voice actors were given leeway to develop their own style and personality.[6]
Aside from major character roles, members of theHalocommunity andHalofans have had small roles in the games. The cast from themachinimaRed vs. Bluewon a lengthy charity auction for a voice role inHalo 3, and perform a comedy routine that changes depending on the difficulty level at which the game is played.[7]Cast members of the defunct TV showFirefly—Alan Tudyk,Nathan Fillion, andAdam Baldwin—have roles as marines inHalo 3[4]as well asHalo 3: ODST[8][9]andHalo 5: Guardians.
Master Chief Petty OfficerJohn-117, commonly referred to as simply theMaster Chief, is the main protagonist and main playable character in many of theHalogames. The character is voiced bySteve Downes, a Chicagodisc jockey. He is one of the Spartans, an elite group of augmented soldiers raised from childhood to be super soldiers. Assisted by the artificial intelligenceCortana, he prevents the catastrophic firing of the Halo Installation 04 inHalo: Combat Evolved. Developing the character of Master Chief was part of Bungie's efforts to make players invested in playing the game.[10]The character has since become a gaming icon, the mascot of the Xbox, and has been rated as one of the greatest characters in video games.[11][12][13][14]In live action, the Chief has been portrayed byDaniel CudmoreinHalo 4: Forward Unto DawnandPablo Schreiberin theParamount+ series.
Cortana, voiced in the games byJen Taylor, is theartificial intelligence(AI) who assists the Master Chief in the video games. She is one of many "smart" AIs, and is based on the brain ofDr. Catherine Halsey; the nature of her construction means she is subject to a finite lifespan. InHalo 4, Cortana begins to succumb to her age, and sacrifices herself to save Chief and Earth from the Forerunner Didact, butHalo 5: Guardiansreveals that she had survived the ordeal. Having found access to the Domain, a Forerunner repository of knowledge, Cortana believes that AIs should serve as the galaxy's caretakers, putting her in conflict with her creators.[15][16]InHalo Infinite, however, after Atriox seemingly defeats the Chief, which devastates her, Cortana finally destroys herself. Cortana has been called one of gaming's greatest characters,[17]one of the "50 Greatest Female Characters",[18]and the heart of the franchise.[19][20]The character'ssex appealhas also been a focus of commentary.[21]
Avery Junior Johnsonis aMarinesergeant majorwho leads human forces throughout theHaloseries. The character is voiced by David Scully. Johnson and a few other Marines survive the destruction ofInstallation 04and are rescued byCortanaand theMaster Chiefduring the novelHalo: First Strike. Johnson plays a larger role inHalo 2, joining forces with theArbiterto stopTartarusfrom activatingInstallation 05. InHalo 3, the Covenant attempt to use him to activate the Halo Array, but are foiled; when the Master Chief decides to activate the local Halo to stop the Flood infestation, the Forerunner construct343 Guilty Sparkkills Johnson to prevent it.[22][23]: 72Johnson is featured inThe Halo Graphic Novelstory, "Breaking Quarantine," and a main character in the 2007 novelHalo: Contact Harvest.[24]Johnson is also featured in the real time strategy gameHalo Wars 2, as a playable leader for the UNSC.
Johnson is in many ways similar to the stereotype of charismatic black Marines found in other science fiction (such asSergeant AponeinAliens, on whom Johnson was partially based),[24]and some critics found him aflat character. Joseph Staten admitted that Johnson was static inHalo: Combat Evolved, and that despite the character's potential, "he sort of inherited those caricature aspects [fromHalo]."[24]Contact Harvestwas a chance "to do right" by the character.[24]Luke Cuddy identified Johnson's character arc as closely tied to the series' themes of struggle and sacrifice.[25]He has been included in critic lists of the best black video game characters.[26]
CaptainJacob Keyes(voiced by Pete Stacker) is a captain in theUNSCwho appears inHalo: Reach,Halo: Combat Evolved,Halo: The Flood,Halo: The Cole Protocol, andHalo: The Fall of Reach. His first chronological appearance is inThe Fall of Reach, where, as a young Lieutenant, he accompanies Dr.Catherine Halseyon her mission to screen possibleSPARTAN-II Projectsubjects.[27]During battle with the Covenant over the planet Sigma Octanus IV, Keyes became known for his complicated and unorthodox maneuver which allowed him to win against impossible odds. Keyes leads his ship thePillar of AutumntoAlpha HaloinCombat Evolved. There, Keyes leads aguerrillainsurgency against the Covenant. Captured and assimilated by the parasiticFlood, he ismercifully killedby the Master Chief, who takes Keyes' neural implants to control the ship.[23]: 74Danny Sapaniportrays a markedly different iteration of Keyes inthe television series.
CommanderMiranda Keyesis the daughter ofJacob KeyesandCatherine Halsey. Miranda appears inHalo 2,Halo 3, and in the final chapter ofHalo: The Cole Protocol. InHalo 2she supports the Master Chief in his battles, and assists Sergeant Major Johnson and theArbiterin stopping the activation of the Halo Array. InHalo 3, Keyes attempts a rescue of Johnson when he is captured by the Covenant to activate the Ark; she is killed by Truth in the attempt.[23]: 75Keyes was voiced byJulie BenzinHalo 2. Benz said that she loved voiceover work and that it was pure chance she had become the voice of Keyes in the first place.[28]WhenIGNasked Benz what she thought of her character, she admitted she had not played the game.[28]The role was recast forHalo 3,[29]with Justis Bolding taking over.Olive Grayportrays a markedly different iteration of Miranda Keyes in the television series.
Dr. Catherine Elizabeth Halseyis a civilian scientist. She works with the military to run theSPARTAN-II Project, creating the most effective weapons humanity has against insurrection, and then in the war with the Covenant.[23]: 45Cortana is derived from Halsey's cloned brain.[23]: 78Jen Taylor, who also voicesCortana, provides the voice and motion capture performance, inHalo: Reach,Halo 4, andHalo 5: Guardians. The character is voiced byShelley Calene-BlackinHalo Legends.Natascha McElhoneportrays the character in the television series.
James Ackersonis a high-ranking UNSC Army colonel. He convinces the Office of Naval Intelligence to fund the SPARTAN-III Project, and vies with Halsey for funding. After he attempts to frustrate Halsey's Spartans, Cortana fakes orders to reassign Ackerson to the front lines of the war. On Mars, he is captured by the Covenant, and executed after leading them on a wild goose chase for a supposed Forerunner artifact on Earth;[23]: 73the ruse is used by Ackerson's brother Ruwan to help the UNSC strike a blow against the Covenant.[30]Joseph Morganportrays the character in the television series.
Senior Chief Petty OfficerFranklin Mendezis theSPARTAN-II's trainer. After teaching the first class of Spartans, Mendez sees additional action fighting the Covenant, before being recruited to train the SPARTAN-IIIs. Initially uncertain of the new Spartans' potential, Mendez trains several companies of the supersoldiers. During the events ofGhosts of Onyxhe is sealed inside a shield world with Halsey and other Spartans, and after the Human-Covenant War, retires from service.[23]: 74[31]
Fleet AdmiralLord Terrence Hood(voiced byRon Perlman) is commander of the UNSC Home Fleet, as well as an English noble. During the events ofHalo 2and3he defends Earth against the Covenant, staying behind when Master Chief and Arbiter depart for the Ark.[23]: 73InHalo 3's epilogue, he leads a memorial service for those who fell in the conflict. The short story "Rosbach's World" reveals that Hood escapes Cortana's attack on Earth during the end ofHalo 5, ending up along with ONI chief Serin Osman on a safe world. Hood blames himself for the situation due to allowing the Master Chief free rein and falls into a depression.Keir Dulleaportrays Hood in the television series.
AdmiralSerin Osmanis a former Spartan-II who washed out of the program. Osman was hand-picked by Office of Naval Intelligence leader Admiral Margaret Parangosky to be her successor. After the Human-Covenant War, Osman served as the leader of Kilo-5, a black ops team that worked to destabilize humanity's enemies to prevent future war. She later orders the assassination of Catherine Halsey, believing her usefulness is at an end.[32]Thanks to the loyalty of the artificial intelligence Black Box, Osman is evacuated from Earth before Cortana attacks. The character appears in the Kilo-5 Trilogy,Halo: Last Light,Halo: Fractures,Halo: Retribution, andHalo 4 Spartan Ops.
CaptainThomas Laskyis the captain of the UNSCInfinity. Portrayed byThom Green, he is a main character in the web seriesHalo 4: Forward Unto Dawn, which depicts his training at an officer academy and rescue by Master Chief as the Covenant invade and glass the planet. InHalo 4, Lasky (voiced by Darren O'Hare) serves asInfinity'sfirst officer and aids the Chief on the Forerunner world Requiem. WhenInfinity'scaptain refuses to listen to the Master Chief and Cortana and leaves Requiem, he is relieved of command and Lasky is promoted. He reappears inHalo 4 Spartan Opsand theHalo: Escalationcomic series. InHalo 5: Guardians, Lasky reluctantly sends Spartan Fireteam Osiris after the rogue Spartan Blue Team. When AIs begin pledging loyalty to Cortanaen masse, Lasky andInfinityare forced to flee Earth. InHalo: Bad Blood, Lasky andInfinitylink up with Blue Team and Fireteam Osiris soon after.
Roland(voiced by Brian T. Delaney) is the current artificial intelligence aboard the UNSC flagshipInfinity. Roland's avatar takes the form of a golden World War II fighter pilot. Unlike many human AI, Roland does not join Cortana and her Created, and continues to serve the UNSC. Roland appears inHalo 4 Spartan Ops,Halo 5,Halo: Spartan Assault, and spin-off media, including theHalo: Escalationcomic series,Halo: FracturesandHalo: Tales from Slipspace.
Building on the failed ORION Program,Catherine Halseyand the UNSC Office of Naval Intelligence developed the SPARTAN-II program to create an elite corps ofsupersoldiersthat could stem rebellions in the UNSC colonies.[33]The Spartan candidates were kidnapped as children and replaced by flash clones that quickly died afterwards. After grueling training, they were subject to dangerous physical augmentation, and equipped with "MJOLNIR"powered armor. Following in the wake of the SPARTAN-II Project were the SPARTAN-IIIs, children orphaned by the Covenant War who became cheaper, more expendable soldiers. After the war, the UNSC began training Spartan-IVs from adult volunteers. The existence of the Spartans was disclosed to the public to raise morale as the Covenant War continued to go badly.[33][34]
Fred-104is a Spartan-II and one of the Master Chief's closest friends. He is voiced by Andrew Lowe inHalo: Legends, portrayed by Tony Giroux inHalo 4: Forward Unto Dawn, and portrayed by Travis Willingham in theHalo 2: AnniversaryTerminals and inHalo 5: Guardians. Fred survives the fall of Reach, as shown inHalo: First Strike, and assists Master Chief and other Spartans in destroying a Covenant armada massing to attack Earth. InHalo: Ghosts of Onyx, Fred and Blue Team fight the Covenant on Onyx and end up within a Forerunner shield world. They reconnect with the outside world inHalo: Glasslands; though to them only a few days have passed, outside months have. In theHalo: Escalationcomic series, Fred and the other members of Blue Team are reunited with the Master Chief, and Fred appears alongside the Master Chief, Kelly, and Linda inHalo 5: Guardians.
Linda-058(voiced by Andrea Bogart in theHalo 2: Anniversaryterminals and Brittany Uomoleale inHalo 5: Guardians) is an excellent marksman. She appears in both much of the spin-off media andHalo 5: Guardians. After being mortally wounded inHalo: The Fall of Reach, she is placed into suspended animation. InHalo: First Strikeshe is revived and participates in action against the Covenant. InGhosts of Onyxshe and her fellow Spartans defend Earth from the Covenant, before being sent to the planet Onyx after receiving a message from Catherine Halsey. Following the end of the Human-Covenant war inHalo: Last Light, Linda and Blue Team investigate a Forerunner structure on the politically unstable planet of Gao and get caught in both the machinations of a power hungry leader and the plans of a rogue Forerunner AI, Intrepid Eye. In theHalo: Escalationcomic series, Linda and the other members of Blue Team are reunited with the Master Chief, and she fights with Blue Team during the events ofHalo 5. Linda is also the main character in theHalo: Lone Wolfcomic series.
Kelly-087, voiced byLuci ChristianinHalo: Legends, Jenna Berman inHalo 4: Forward Unto DawnandMichelle LukesinHalo 5: Guardians, is the Spartan-II's scout and the Master Chief's best friend. Kelly is noted for her incredible speed, even before augmentation. She is initially presumed lost with other Spartans dispatched to Reach during its invasion, but inHalo: First Strikeit is revealed she survived. During the events of the novel Halsey kidnaps Kelly and flees with her to Onyx. Kelly appears alongside the Master Chief, Linda and Fred inHalo 5: Guardians.
James-005is a Spartan-II, appearing inHalo: The Fall of ReachandHalo: Empty Throne.
InHalo: The Fall of Reach, James serves as the fourth member of Blue Team during several battles in the Covenant war, serving as the team's scout. During the Battle of Sigma Octanus IV, James loses his left arm from the elbow down to a blast from a Hunter, but survives and helps to crush the alien creature and its bond brother with a large quartz monolith. While escaping, James has to be carried out by the Master Chief after passing out from his injuries. Having recovered by the time of the Fall of Reach, James joins the Master Chief and Linda on a mission to a space station under heavy Covenant attack to secure a navigation database. However, James' thruster pack is hit by enemy weapons fire, sending him tumbling away into space uncontrollably. After returning to thePillar of Autumn, the Master Chief has the ship scan for James, but there's no sign of him, leaving everyone to presume that he had died, although James is officially listed as missing in action.
InHalo: Empty Throne, James returns, now going by the name James Solomon. It's revealed that a patrol tug eventually recovered James' body after Reach's destruction and brought him to a secret ONI facility where James was revived. However, it took over a year for James to recover and by the time that he did, the war was already over. James has since become a private military contractor working for ONI and doing the jobs that ONI both didn't want to do and couldn't do because of the potential fallout, missions so hazardous and politically risky that ONI had to maintain complete deniability. Over time, James becomes disillusioned with his work and dreams of retiring to a small uncharted habitable moon that he had discovered which James names Suntéreó after the ancient Greek word for preserving something by keeping it close. However, James realizes that all he knows is war and he doesn't want to retire. In late 2559, ONI sends James after Chloe Hall, a young clone of Dr. Halsey who holds the key to controlling the Domain, causing several factions to chase after her. Bonding with the girl, James is fatally injured rescuing her from the Banished. Encountering the pair, James' old friend Adriana-111 agrees to report that they were both killed by the Banished. James helps Chloe escape on his ship, leaving her with his armor's helmet as a memento and his AI Lola to look after her. Surrounded by Banished, James detonates his armor's self-destruct in order to cover Chloe's escape. Following James' wishes, Lola takes the girl to Suntéreó where she will be safe from everyone chasing after her.
CommanderSarah Palmer(Jennifer Hale) is a Spartan-IV stationed on UNSCInfinityand the leader of the Spartan IVs. She appears inHalo 4,Halo 5: Guardians,Halo: Spartan Assault, "Halo: Shadows of Reach", and theHalo Escalationcomic series.
Gunnery SergeantEdward Malcolm Buck(Nathan Fillion) is a longtime human soldier. InHalo 3: ODSThe is the leader of Alpha-Nine, a squad of Orbital Drop Shock Troopers (ODSTs). He is subsequently inducted into the SPARTAN-IV program, and is a playable member of Fireteam Osiris in the video gameHalo 5. He makes a brief appearance inHalo: Reachand is the main character of the novelsHalo: New BloodandHalo: Bad Blood. In the latter, Buck reunites his old squad Alpha-Nine (minus "Rookie", who died during a previous mission in the timeskip between3and4) for a classified ONI mission following the events ofHalo 5. At the end of the novel, Buck decides to return to leading Alpha-Nine full-time and is married to long-time girlfriend Veronica Dare byInfinity'sAIRoland.
SPARTAN-B312, also known by his call-signNoble Six, is a Spartan III who is the main protagonist ofHalo: Reach. B312 is the latest addition to Noble Team, a fireteam of fellow Spartan III's and one Spartan II that is stationed on Reach just prior to the events ofHalo: Combat Evolved.[35]His identity and background are highly classified due to his prior work with the ONI, and as he is transferred to Noble Team, the Covenant invades the planet. A solitary figure who prefers working alone and gains the nickname "Lone Wolf" as a result, Six earns the respect of his teammates despite their early resentment of him and plays a role in transferring crucial data to Cortana before she travels to Installation 04 on thePillar of Autumn, also ensuring the ship's safe departure from Reach. However, most members of Noble Team, including Six, perish during the planet's fall to the Covenant and glassing. While characterized as a male in canon,[36]the player can opt to characterize Noble Six as either male or female prior to playing the campaign ofHalo: Reach. Six's male and female incarnations are voiced byPhilip Anthony-Rodriguezand Amanda Philipson, respectively.[37]
Jameson Lockeis a Spartan IV who first appeared in both the opening and ending ofHalo 2 Anniversary, which set up his task of hunting down the Master Chief inHalo 5: Guardians.Mike Colterportrays Locke in both Anniversary and theNightfallorigin movie, but only provided themotion-captureperformance for the character inGuardians; due to scheduling conflicts withJessica JonesandLuke Cage, Locke is instead voiced byIke Amadi. He is the current squad leader of Fireteam Osiris, tasked with hunting down the Master Chief and Blue Team. Locke killsJul 'Mdamain single combat and helps the Arbiter defeat the last of Jul's Covenant forces. InHalo Infinite, a Brute Chieftain on Zeta Halo named Hyperius can be seen wearing Locke's helmet and chest armor on his shoulder as a trophy. It remains unknown what Locke's current status is or whether he survived the encounter with Hyperius.
TheOrbital Drop Shock Troopersare an elite special ops component of the UNSC Marine Corps, distinguished by their unique deployment from space onto planetary surfaces through entry-vehicles nicknamed "drop pods", similar toparatroopers.[38]A battalion of ODSTs, codenamed Alpha-Nine, are the primary focus ofHalo 3: ODST, with Edward Buck serving as the squad leader and the player assuming the role of "The Rookie" on the squad. Several ODSTs, including Antonio Silva, have a disdain for the Spartan IIs and their abilities, likely stemming from an incident when a 14-year-old John-117 accidentally killed two ODSTs while defending himself from being bullied.[39]
High Prophets, or Hierarchs, are the supreme leaders of the theocraticCovenant. Upon assuming office, each Hierarch picks a newregnal namefrom a list of names of former Hierarchs, similar to the practice of someOrthodoxPatriarchs.[40]InHalo 2, there are shown to be only three; the Prophets of Truth, Mercy, and Regret (voiced byMichael Wincott,Hamilton CampandRobin Atkin DownesinHalo 2, respectively; inHalo 3, Truth is voiced byTerence Stamp). The novelHalo: Contact Harvestreveals that these three Prophets, originally known as the Minister of Fortitude, the Vice-Minister of Tranquility, and the Philologist,[41]plotted to usurp the throne of the Hierarchs; in the process, they hide the truth that humanity is descended from the Covenant gods, the Forerunners, believing that the revelation could shatter the Covenant. During the course ofHalo 2, Regret attacks Earth, and then retreats to Delta Halo. There, he calls for reinforcements, but is killed by the Master Chief. Later, Mercy is attacked by theFloodonHigh Charity; Truth could have saved him, but left him to die so he could have full control over the Covenant. InHalo 3: ODST, Truth is seen inspecting some Engineers around the Forerunner construct near New Mombasa. InHalo 3, Truth also meets his demise at the hands of theArbiterwhen the Prophet attempts to activate all the Halo rings from theArk. His death becomes the culmination of the Covenant's downfall.
Preliminary designs for the Prophets, including the Hierarchs, were done by artist Shi Kai Wang. According toThe Art of Halo, the Prophets were designed to look feeble, yet sinister.[1]: 55Originally, the Prophets appeared to be fused to the special hovering thrones they use for transport; even in the final designs, the Prophets are made to be dependent on their technology. Special headdresses, stylized differently for each of the Hierarchs, add personality and a regal presence to the aliens.[1]: 55–56
The Arbiter is a rank given to special Covenant Elite soldiers who undertake suicidal missions on behalf of theHierarchsto gain honor upon their death. They are revered amongst the Covenant for their bravery and skills. InHalo 2, the rank of Arbiter is given to Thel 'Vadamee, the disgraced former Supreme Commander of the Fleet of Particular Justice, which was responsible for destroying Reach. It was under his watch that Installation 04(Alpha Halo) was destroyed inHalo: Combat Evolvedand theAscendant Justicewas captured by theMaster ChiefinHalo: First Strike. Rather than killing him, the Prophets allow the Commander to become the Arbiter, and to carry on his missions as the "Blade of the Prophets".[42]Eventually, the Arbiter rebels against the Prophets during the Great Schism by dropping the "-ee" suffix from his surname as a symbol of his resignation from the Covenant, and joins his fellow Elites in siding with humanity and stopping the Halo array from firing. Some of his backstory is featured inHalo: The Cole Protocolset about fifteen years beforeCombat Evolvedwhere the Arbiter, then Shipmaster Thel 'Vadamee, comes into conflict with UNSC forces led by then-LieutenantJacob Keyes. The events sow the seeds of doubt in the future Arbiter's mind about the Prophets and their plans. This particular Arbiter is voiced byKeith David; the Arbiter that appears inHalo Warsis voiced byDavid Sobolov.
Originally to be named "Dervish,"[43]the Arbiter was a playable character intended to be a major plot twist.[44]Reception to the character was lukewarm, with critics alternatively praising the added dimension brought by the Arbiter, or criticizing the sudden twist.[45][46]
Making his debut inHalo 2,Special OpsCommander Rtas 'Vadum is never named in the game itself, leading to the unofficial nickname of "Half-Jaw" by fans,[47]due to the missing mandibles on the left side of his face. With the release ofThe Halo Graphic Novel, however, the character was finally named in the storyLast Voyage of the Infinite Succoras Rtas 'Vadumee. The character is voiced byRobert Davi.
'Vadum, originally 'Vadumee before the Covenant Civil War, is a veteran Covenant Elite and the second most prominent Elite character in the series after the Arbiter. He carries the Covenant rank of Shipmaster.The Last Voyage of the Infinite Succorexplains how he loses his left mandibles; he is injured after fighting one of his friends, who was infected by theFlood.[48]During the early events ofHalo 2, 'Vadumee serves as a messenger between the Hierarchs and the Elite Council, as he is seen relaying messages between the two parties in the Prophets' chamber.[49]Surviving the Prophets' betrayal, 'Vadumee joins his brethren in fighting the Brutes, dropping the "-ee" suffix from his surname to symbolize his resignation from the Covenant. 'Vadum aids the Arbiter in attacking a Brute base to capture a Scarab before departing to take control of a nearby Covenant ship.
InHalo 3, 'Vadum is Shipmaster of the Swords of Sanghelios flagshipShadow of Intent, and supportsCortana's plan to follow Truth to the Ark. Along with the Arbiter, 'Vadum leaves Earth to return to the Elites' homeworld with the end of the war. Rtas 'Vadum is known as a quick, smart, and ingenious tactician, an unparalleled fighter (especially with an Energy Sword), and an excellent leader. He expresses great care for his soldiers. He is eager to exact revenge on the Brutes after the Great Schism.
'Vadum appears in the novellaHalo: Shadow of Intenttaking place after the war. Still the Shipmaster of theShadow of Intent, 'Vadum protects Sangheili space and comes into conflict with a Covenant splinter faction led by two surviving Prophets, Prelate Tem'Bhetek and the Minister of Preparation Boru'a'Neem. The Prelate is shown to have a personal grudge against 'Vadum, blaming him for the death of his family when High Charity fell to the Flood and 'Vadum had the city partially glassed in a failed effort to contain the Flood. After capturing the Prelate, 'Vadum shows sympathy for him and reveals that the Prelate's family may well have been alive when the Prelate departed the city, meaning that Preparation lied to him. 'Vadum's words shake the Prelate's faith in Preparation who is revealed to be planning to use a prototype Halo ring to destroy Sanghelios using theShadow of Intentto power it. The Prelate sacrifices himself to stop Preparation, leaving 'Vadum with a new outlook following the encounter. Along with getting the Arbiter to relax age-old rules not allowing females to serve in the military, 'Vadum reveals that he plans to use navigation data recovered from the Prelate's ship to seek out the rest of the Prophets and attempt to determine who should be punished aswar criminalsand who should be pardoned to coexist in peace as innocents.
Tartarus (voiced byKevin Michael Richardson) is the Chieftain of the Brutes, easily recognized by his white hair, distinctivemohawk, and massive gravity hammer known as the "Fist of Rukt". Rough, arrogant, and disdainful of the Elites, Tartarus is completely dedicated to the Prophets' salvific "Great Journey".Halo: Contact Harvestreveals that Tartarus became Chieftain after killing the former Chieftain, his uncle Maccabeus, and seizing the Chieftain's weapon. InContact Harvest, Tartarus acts as one of the main antagonists, working to destroy the human colony of Harvest and coming into conflict with Sergeant Johnson. During the final battle of the novel, Johnson's life is inadvertently saved when one of Tartarus' own soldiers turns against him, damaging Tartarus' armor and forcing him to retreat. Tartarus makes his first appearance in the novelHalo: First Strike, as one of the first Brutes allowed into the chamber of theHigh Prophet of Truth.[50]InHalo 2, Tartarus acts as an agent of the Prophets, branding theArbiterfor his failures. The Chieftain later appears when the Arbiter tries to retrieve the Activation Index ofDelta Halo. On the Prophets' orders, Tartarus takes the Index and pushes the Arbiter to what was intended to be his death in a deep chasm.[51]Tartarus heads to the control room of Halo with the Index in order to activate Halo, but is confronted by the Arbiter. Blind to the Prophets' deception about the Great Journey, Tartarus activates the ring; the Brute is ultimately killed by the coordinated efforts of the Arbiter with the help of Sergeant Major Johnson, successfully preventing the firing ofDelta Halo.
Designs for Tartarus began after the basic shape and design of the common Brutes was complete.[52]Artist Shi Kai Wang added small but distinctive changes to Tartarus' armor and mane in order to distinguish the Chieftain from the other Brutes.[53]The visual design of the Chieftains was later modified forHalo 3, with the seasoned warriors sporting more elaborate headdresses and shoulder pads.[2]In a review of the character,UGO Networksnoted that whereas the Elites "are a precisionscalpel," Tartarus was a "baseball bat" that smashes everything in its path, a reference to their ceremonial weapons, the Energy Sword and Gravity Hammer, respectively.[54]
Jul 'Mdama, voiced by Travis Willingham, is the Supreme Leader of a newly formed Covenant splinter faction following the defeat of the Covenant Empire inHalo 3. Calling himself "the Didact's Hand," Jul's faction initially seeks the Forerunner warrior Didact as an ally against humanity. The character appears in Karen Traviss' Kilo-5 trilogy of novels, as well asHalo 4andHalo 5: Guardians. It was revealed inHalo: Escalationthat Jul 'Mdama's faction was only one of many factions self-proclaiming to be a new "Covenant".[55]
First appearing in theKilo-Five trilogy, Jul is depicted as a member of the Servants of the Abiding Truth, a religious Covenant splinter faction that is opposed to theArbiterand his emerging Swords of Sanghelios government. The Servants' attempt to defeat the Arbiter ended in catastrophe thanks to the intervention of the UNSCInfinityin the battle. Jul was subsequently captured by the Kilo-Five black ops team and imprisoned on the Forerunner shield world of Onyx. Jul eventually escaped using one of the shield world's slipspace portals and traveled to the Sangheili colony world of Hesduros. By portraying his experiences on the shield world in a religious light, Jul was able to win over the inhabitants, but learned that his wife had been killed. Grief-stricken and blaming humanity, Jul discovered the coordinates to the shield world of Requiem on Hesduros and began building up a massive following, forming a new Sangheili-led Covenant. Jul's Covenant eventually found Requiem, but were trapped outside for three years, as depicted in theHalo 4Terminals, because the planet required the presence of a Reclaimer (human) to open.
InHalo 4, the Master Chief arrives at Requiem in the rear half of theForward Unto Dawnand comes into conflict with Jul and his forces. The Master Chief's presence causes Requiem to finally open, granting Jul's Covenant access to the planet. Jul eventually leads some of his forces into Requiem's core where the Forerunner known as the Didact is imprisoned. The Didact is able to trick the Master Chief into releasing him and Jul bows down before him. Despite the core's subsequent collapse, Jul manages to escape with his life and allies himself with the Didact against the humans from theInfinity. Subsequent to this, Jul brands himself "the Didact's Hand" with his status and ability to control the Prometheans giving him even more power and attracting more followers to his cause.
InSpartan Ops, six months after the Battle of Requiem, theInfinityreturns to Requiem which is still occupied by Jul and his Covenant. The forces of theInfinityand Jul's forces battle each other for control over the planet while Jul personally leads the attempt to access theLibrarian's AI which Jul wants to use for the power that the Librarian can give to him. InHalo: Escalation, Jul and Halsey work together while Jul faces a mutiny inside of his own forces. Their mission to access the Absolute Record of Forerunner Installations, however, fails.
InHalo 5: Guardians, Jul's power has begun to break following all of his defeats and his Prometheans turning against him under the influence ofCortana. On the remote world of Kamchatka, Jul attempts to access the Forerunner Domain with the help of Halsey while his loyal forces battle the Prometheans. However, Jul is unaware that Halsey has betrayed his location to the UNSC due to the threat Cortana poses. He is killed by Spartan Jameson Locke in single combat and Jul's Covenant falls apart soon thereafter.
InHalo: Legacy of Onyx, Jul's two sons are left on the opposing sides of an ongoing conflict on the Onyx shield world. Dural, now the leader of the Servants of the Abiding Truth, believes the Covenant to truly be gone with the death of his father and the destruction of his faction despite the existence of other ex-Covenant splinter factions in the galaxy.
Sali 'Nyon is the leader of a newly formed Covenant splinter faction following the defeat of the Covenant Empire inHalo 3. Originally a member of Jul 'Mdama's faction, 'Nyon eventually rebels against Jul, claiming to be the true "Didact's Hand."
In theHalo: Escalationcomic series, 'Nyon's increasing questioning of Jul's leadership and alliance with Dr. Catherine Halsey leads to him breaking away from Jul's faction during the Battle of Aktis IV. Seizing the UNSC's half of the Janus Key, 'Nyon declares himself to be the true Didact's Hand and then broadcasts a message across Jul's fleet, inciting them to rebellion. However, one of 'Nyon's men steals the Janus Key and betrays him to Jul's forces. Most if not all of 'Nyon's forces are killed, and 'Nyon himself is imprisoned by Jul aboard his flagship, theSong of Retribution, before being transported to the damagedBreath of Annihilation. During the events at the Absolute Record, 'Nyon is released by ONI operative Ayit 'Sevi as a distraction, allowing 'Nyon to lead a rebellion aboard the ship and seize the assault carrier and its collection of Forerunner artifacts for himself. 'Nyon then escapes into Sangheili space to rally support for his faction, briefly engaging Jul, who is unable to retake theBreath of Annihilation.
'Nyon is mentioned inHalo 5following Jul's death. Some Sangheili, including some of the Arbiter's own men, are shown to view him as a true leader in comparison to the Arbiter or Jul and the next best chance for a new Covenant.
InHalo: Empty Throne, 'Nyon has formed an alliance with Dovo Nesto, a surviving Prophet who seeks to resurrect the Covenant Empire. Nesto claims to one of his followers that the threat of Cortana caused 'Nyon to side with him, but in reality, Nesto has promised 'Nyon a place at his side in the restored Covenant as the head of the military. By this point, 'Nyon has amassed a significant following and plays a key role in Nesto's plans to seize control of the Domain for himself. However, this puts 'Nyon at odds with Severan, the son of Tartarus and a Banished War Chief who was made the same promise by Nesto, leading to a massive three-way battle between the UNSC, the Banished, and 'Nyon's faction. In a final duel, 'Nyon is defeated and decapitated by Severan who is prevented from doing the same to Nesto by a Swords of Sanghelios traitor. With 'Nyon dead, Nesto takes control of 'Nyon's faction and its considerable resources to continue his efforts to rebuild the Covenant.
Ayit 'Sevi is a Sangheili mercenary who secretly works as an ONI operative, serving as a spy amongst the various Covenant remnant factions. Admiral Serin Osman explains that 'Sevi was chosen because, unlike a typical Sangheili who focuses on honor and overt, direct aggression, 'Sevi is a deviant who relies more on self-preservation and deception. In essence, ONI had chosen the most human Sangheili that they could find.
In theHalo: Escalationcomic series, 'Sevi is first seen retrieving abioweaponfrom a Kig-Yar pirate faction with his presence inciting the UNSCInfinityto get involved in the conflict after Osman lies that 'Sevi is an operative of Jul 'Mdama's Covenant faction. 'Sevi comes into conflict with Spartans Gabriel Thorne and Naiya Ray as they work to retrieve the bioweapon. After theInfinitydestroys the pirate base using nuclear weapons, 'Sevi is extracted by ONI, revealing him to be their agent. Captain Thomas Lasky later realizes that ONI had staged the incident in order to force theInfinityto act against the pirates, a threat which the UNSC had been ignoring. 'Sevi later plays a pivotal role in the battle for the Absolute Record, helping Thorne, Sarah Palmer, Holly Tanaka and Dr. Henry Glassman to infiltrate one of Jul's assault carriers and then later releasing Sali 'Nyon as a distraction, leading to infighting amongst the Covenant forces. After the mission, 'Sevi hides the team on the carrier until he is able to help them escape to a rendezvous with an ONI Prowler.
InHalo: Empty Throne, 'Sevi is present when Prophet Dovo Nesto takes control of 'Nyon's faction to continue his mission of rebuilding the Covenant Empire. Recognized as the one who had released 'Nyon from his imprisonment, allowing 'Nyon to build up a significant force, 'Sevi is trusted by Nesto who allows him to hear Nesto discussing his next plans. 'Sevi later transmits a report on these developments to Admiral Jilan al-Cygni who plans to rendezvous with him.
Nizat 'Kvarosee is a former Covenant Fleetmaster appearing inHalo: Silent Storm,Halo: OblivionandHalo: Outcasts. Cunning and intelligent, Nizat was one of humanity's first Covenant enemies before going rogue in his efforts to destroy humanity.
InHalo: Silent Storm, taking place near the start of the war between humanity and the Covenant, Nizat is the Fleetmaster of the Covenant Fleet of Inexorable Obedience. In this role, Nizat destroys a number of human colonies and clashes with the Spartans under the leadership of John-117 during Operation: SILENT STORM, where the Spartans try to buy humanity time to adapt to their new enemy by striking at the Covenant behind enemy lines using board-and-destroy tactics. Human insurrectionists, hoping to use the Covenant to destroy the UNSC, provide Nizat and his forces with information on the Spartans, their armor, and their tactics, allowing Nizat to adapt his own strategies to match. Despite this, the UNSC successfully launches a massive attack on the Covenant world of Zhoist, destroying two cities and an important fleet support station, killing the special forces sent after them, and decimating Nizat's fleet. Throughout these events, Nizat grows more and more annoyed with the Minor Minister of Artifact Survey, who is assigned to oversee his fleet, but Survey acts more and more irrationally as time goes on, causing Nizat to consider killing him several times despite the almost certain death sentence that it would bring upon himself. After the Battle of Zhoist, Nizat's steward Tam 'Lakosee kills Survey for his irrational and disrespectful behavior, but Nizat decides to cover it up in order to protect his subordinate.
InHalo: Oblivion, taking place a few months afterHalo: Silent Storm, Nizat has gone rogue from the Covenant following the decimation of his fleet by the Spartans. Recalled to High Charity by the Prophets, Nizat had insisted that ONI is the true danger and they needed to target and destroy it, but he is blamed for the losses suffered by the Covenant and sentenced to execution, forcing Nizat to flee with his loyal followers. Assembling a small fleet, Nizat enacts an elaborate plan to plant a beacon on ONI that will lead his forces to it for destruction by using a Covenant frigate as bait on the barely inhabitable world of Netherop, the site of one of the earliest skirmishes in Operation: SILENT STORM. Nizat successfully lures the Spartans to Netherop and plants the beacon on a corpse that they take back for study, evading detection as the true mastermind of the plot. However, the Covenant retaliates for Nizat's actions, decimating his fleet and exiling Nizat on Netherop alongside his surviving ground forces, creating an orbital mine shell in order to prevent anyone from ever coming back to the planet again, especially any of Nizat's surviving commanders looking to rescue him. Lieutenant Commander Amalea Petrov and several other UNSC soldiers are stranded as well, but Nizat remains certain that his surviving forces will carry out his plan in Nizat's absence.
InHalo: Outcasts, the UNSC returns to Netherop alongside the Swords of Sanghelios in search of a weapon capable of destroying Cortana's Guardians. After dismantling the orbital mine shell, the Arbiter discovers that Nizat and Petrov are both still alive and waging war against each other thirty-three years after they were stranded on the planet. However, Nizat has control of the weapon that both sides seek, dubbed the Divine Hand, which proves to be a Precursor weapon used by Precursor fugitives during the war of extermination that the Forerunners had waged against their creators. A side effect of using the weapon was the transformation of Netherop from a lush world into a barren wasteland. Four of Nizat's soldiers are killed by Spartan Olympia Vale and the Sangheili while another one defects and Nizat and Tam are captured. With Nizat and Tam proving to be fanatical beyond reason, Nizat in particular, the Arbiter decides to maroon them once again on Netherop when the UNSC forces, now including a rescued Petrov and her people, and the Swords of Sanghelios leave. Already severely wounded and weakened, Nizat develops an infection and he and Tam are left facing an inevitable death. So as to spare his friend from that fate, Tam chooses to mercy kill Nizat with his energy sword.
Atriox, voiced by John DiMaggio in Halo Wars 2 and Ike Amadi in Halo Infinite, is the Brute who founded and leads the mercenary organization known as the Banished. Having fought for the Covenant during the Human-Covenant War, Atriox grew disgruntled with the alien empire as his Brute brothers were carelessly used as cannon fodder by their Elite masters, a species locked in a feud with the Brutes because the Brutes' strength and aggression challenged the Elites' superior status. He also appears in the Halo TV series as a member of the Covenant.
InHalo: Rise of Atriox, he is shown as a Covenant soldier fighting against the UNSC Marines during the war. Atriox chases down a Marine and expresses to him how meaningless he finds the war against the human species to be as his brothers die. With no hatred for humanity,[56]Atriox quickly kills the Marine to complete his mission. Another Brute then reveals that he was spying on Atriox and declares him a heretic for renouncing the Covenant, resulting in Atriox killing him too. Having murdered one of his own and spoken against the Covenant religion, Atriox is sent to be executed by his Elite superiors. Atriox rebels against his punishment, killing the Elite executioner, inspiring a Brute named Decimus and others to overthrow the other Elites in the area. Atriox forms the Banished with them, and leaves the Covenant.
Atriox is shown recruiting more members to the Banished inHunting PartyfromHalo: Tales From Slipspace. He is willing to hire from all species, including Elites and humans.[57]Following the Great Schism, a civil war that tore the Covenant apart, a squad of Elite assassins known as the Silent Shadow embarked on a genocidal campaign against the Brutes for revenge. The Silent Shadow squad kills the Brute crew on Atriox's ship until they encounter him. Atriox tells the Elites that he and his Brutes were not responsible for the Great Schism and that they also hate the Covenant. The leader of the squad still expresses hatred to Atriox for being a Brute, to which Atriox responds that "vengeance is petty" and that "vengeance has no reward". The Silent Shadow squad reluctantly kill their leader, and join the Banished as mercenaries.
Halo: The Official Spartan Field Manual further details Atriox's openness to recruiting humans, as well as how his Brute Chieftains have spread their influence into both Brute colonies and human criminal enterprises.[57] In Halo: Divine Wind, several Banished humans are mentioned to have accompanied Atriox on his voyage to the Forerunner world known as the Ark.[58]
InHalo Wars 2, Atriox, who had taken over the Ark, reveals himself to Spartan-II Red Team of the UNSC ShipSpirit of Firewho had just arrived. Atriox attacks the Spartans and lets them escape, sending his Banished forces to chase after them. A prolonged battle for territory ensues between the crew of theSpirit of Fireand Atriox's forces. Having lost Decimus and his flagship in the battle, Atriox expresses respect to his enemy for their tenacity, and offers them a chance to leave peacefully rather than be hunted down. Captain Cutter of theSpirit of Firerefuses and successfully captures a Halo ring, recently created by the Ark, from the Banished.
InHalo: Shadows of Reach, Atriox manages to make contact with his forces in the Milky Way, instructing them to find a Forerunner slipspace portal on the former human colony of Reach. After the portal is activated by the Keepers of the One Freedom – another former Covenant faction allied with the Banished – with the help of their human acolytes, Atriox is able to use the shards of the Forerunner crystal recovered inHalo: First Striketo connect the portal to the Ark and fly through it to Reach in a Banished Lich. Rather than using the portal to bring more reinforcements back to the Ark, Atriox departs to attend to a greater purpose, leaving behind his troops on the Ark to hold it in his absence. The fanatically religious Keepers steal Atriox's Lich to travel to the Ark and fire the Halos while Veta Lopis – undercover amongst the Keepers – passes a warning to the UNSC that Atriox has returned.
InHalo: Outcasts, Atriox seeks a weapon on Netherop that he had learned about on the Ark capable of destroying Cortana's Guardians. Atriox races the UNSC, the Swords of Sanghelios and the Created to the weapon. Deeming the weapon to be too dangerous for either of their people to ever use, the Arbiter and Olympia Vale feign surrender and hand it over to Atriox in the hopes that the Banished will destroy themselves with it if they can figure out how to get it working.
InHalo Infinite, Atriox faces the Master Chief in battle on board the UNSCInfinityand defeats him, ultimately throwing the Spartan off of the ship. When the Master Chief is rescued six months later, he discovers that Atriox and his forces have destroyed theInfinityand nearly wiped out all of the UNSC forces on Installation 07. However, Atriox himself is believed to be dead, having apparently been killed when Cortana destroyed a section of the ring in order to stop him from using it. Through echoes of Cortana's memories, the Master Chief learns that the AI had approached Atriox as a representative of his species and destroyed his homeworld when he refused to surrender to her, provoking the Banished leader to seek out a UNSC AI known as the Weapon in order to defeat Cortana. After seeing the consequences of her actions, Cortana sacrificed herself to make things right by stopping the remorseless Atriox. However, Atriox is revealed to have secretly survived and he locates the Endless, a threat imprisoned by the Forerunners long ago on the ring.
In the Halo TV series, a Brute widely speculated by fans to be Atriox, and referred to as such by series creator David J. Peterson, is introduced as a Covenant military leader, facing off against the Master Chief and Silver Team twice.[59]
Pavium, voiced by TJ Storm, is a Jiralhanae Warlord, the brother of Voridus, and the leader of the Clan of the Long Shields, appearing in the Awakening the Nightmare DLC of Halo Wars 2 and in Halo: Divine Wind.
InAwakening the Nightmare, Pavium and Voridus are ordered by Atriox to scavenge around the remains of High Charity, but are explicitly ordered not to go in. Disregarding Atriox's orders, the brothers breach the containment shield, accidentally releasing the Flood upon the Ark once more. Pavium and Voridus lead the Banished defense against the Flood, managing to reactivate the Ark's defenses and kill a Proto-Gravemind, allowing the Banished and the Sentinels to emerge victorious. A furious Atriox reprimands the brothers for their actions and orders them to clean up their mess.
In Halo: Divine Wind, it's mentioned that Atriox spared Pavium and Voridus because Voridus managed to get a Forerunner communications system working long enough for Atriox to send a message to Escharum, setting up the Banished leader's return to the Milky Way galaxy. The two intercept a message from the Ferrets warning that the newly arrived Keepers of the One Freedom and Dhas 'Bhasvod's Covenant faction intend to activate the Halo rings from the Ark. Seeing this as a chance to redeem their clan, Pavium, Voridus and the Clan of the Long Shields battle the combined Keeper and Covenant forces to stop their plans, but the clan is nearly wiped out in the process, with the Keepers and the Covenant suffering similarly massive casualties. After the Spirit of Fire destroys the only facility on the Ark capable of firing Halo, Pavium and Voridus decide to claim that they held the Keepers off long enough for the Spirit of Fire to attack, knowing that there is no one left to contradict them. During this time, Pavium faces a challenge from Thalazan, who sees an opportunity to compete with the disgraced Warlord for leadership. In the end, the only known survivors of the Clan of the Long Shields are Pavium, Voridus and possibly Thalazan.
Voridus, voiced by Ashley Bagwell, is a Jiralhanae, the brother of Pavium, and part of the Clan of the Long Shields, appearing in the Awakening the Nightmare DLC of Halo Wars 2 and in Halo: Divine Wind.
InAwakening the Nightmare, Pavium and Voridus are ordered by Atriox to scavenge around the remains of High Charity, but are explicitly ordered not to go in. Disregarding Atriox's orders, the brothers breach the containment shield, accidentally releasing the Flood upon the Ark once more. Pavium and Voridus lead the Banished defense against the Flood, managing to reactivate the Ark's defenses and kill a Proto-Gravemind, allowing the Banished and the Sentinels to emerge victorious. A furious Atriox reprimands the brothers for their actions and orders them to clean up their mess.
Escharum, voiced by Darin de Paul, is the War Chief of the Banished and Atriox's old mentor appearing inHalo: Shadows of Reach,Halo Infinite,Halo: The Rubicon ProtocolandHalo: Empty Throne.
In Halo: Shadows of Reach, Escharum coordinates an effort by the Banished to find a slipspace portal on Reach in order to return Atriox to the Milky Way galaxy from the Ark, where he is stranded, having received a message with instructions from Atriox months earlier. By following Blue Team, Castor and the Keepers of the One Freedom successfully locate the slipspace portal and open it, allowing Atriox to return in a Banished Lich with several of his top warriors. Escharum is outraged by Castor's hijacking of the Lich, but on Atriox's instruction he departs with his men deeper into the Forerunner installation, allowing Castor and his men to leave in Atriox's ship. As a Guardian approaches, drawn by the activation of the slipspace portal, Escharum's intrusion corvette extracts him, Atriox and Atriox's warriors, and they flee Reach to attend to a greater purpose of some kind.
In Halo Infinite, six months after the defeat of the UNSC at Installation 07, Escharum is the leader of the Banished following Atriox's apparent death at Cortana's hands. Working with a mysterious being known as the Harbinger, Escharum seeks to release the Endless from their millennia-long imprisonment by the Forerunners on the Halo ring, with the Harbinger in return agreeing to help the Banished repair and fire the ring. Appearing to the Master Chief in holographic transmissions throughout his journey, Escharum introduces himself and taunts the Master Chief, challenging him to a final fight between the Spartan and the old War Chief, which Escharum calls "a true test of legends." By kidnapping the pilot of Echo 216, 'Rdomnai lures the Master Chief to Escharum's base, where the Master Chief kills 'Rdomnai and engages in a final battle with Escharum, mortally wounding him. Dying, Escharum proclaims that his passing will only inspire others and requests that the Master Chief tell the Banished that he died well. The Master Chief holds Escharum as he passes away, surprising the pilot, who considers Escharum a monster, with the respect he shows him in his final moments. The Master Chief responds that while Escharum was a monster, in the end he was also a soldier, questioning his choices and hoping that he did the right thing.
InHalo: Empty Throne, following the destruction of Doisac, Escharum orders War Chief Severan to destroy Earth in retaliation while Atriox and Escharum go to Zeta Halo.
War Chief of the clan Vanguard of Zaladon, Severan is the son of Tartarus and appears inHalo: Empty Throne.
In Halo: Empty Throne, Severan is a Banished War Chief and the only surviving child of Tartarus, his siblings having been killed by their enemies following Tartarus' death at the hands of the Arbiter and Sergeant Johnson. Severan learns of the UNSC Infinity's mission to Zeta Halo from a captured ONI operative and alerts Atriox, allowing Atriox to ambush humanity's flagship at Zeta Halo. Atriox places Severan in charge of the vast Banished forces that do not accompany the Warmaster to Zeta Halo. Shortly thereafter, it is revealed that Severan is secretly loyal to High Lord Dovo Nesto, a surviving Prophet who seeks to resurrect the Covenant; through Severan, Nesto bends the vast resources of the Banished to his cause. Under Escharum's orders, Severan attacks Earth with over a thousand ships, intending to destroy it in retaliation for the destruction of his homeworld by Cortana. However, following Cortana's destruction and the sudden deactivation of the Guardians, Severan retreats to Boundary to pursue the Lithos, a gateway that will allow Nesto to seize control of the Domain, coming into conflict with the UNSC and the Swords of Sanghelios in the process. After discovering that Nesto has formed an alliance with Sali 'Nyon and his Covenant faction, Severan realizes that Nesto is only using him and turns on his former master. The resulting battle and an orbital strike by the UNSC Victory of Samothrace inflict significant casualties upon both the Banished and the Covenant. The vengeful Severan chases down 'Nyon and Nesto, killing 'Nyon, but he is gravely wounded and left for dead by the Swords of Sanghelios traitor Vul 'Soran before he can kill Nesto as well. Kept alive by life-sustaining armor, Severan vows to hunt down Nesto and to keep the Banished from falling apart following the disappearance of Atriox, Escharum and Zeta Halo.
A high-ranking Brute appearing in the novels Halo: Last Light, Halo: Retribution, Halo: Silent Storm, Halo: Shadows of Reach and Halo: Divine Wind. In the wake of the defeat of the Covenant, Castor is the leader of the Keepers of the One Freedom, a splinter faction of fanatical zealots that still follows the Covenant's religion.
In Halo: Silent Storm, Castor and his best friend Orsun are members of the Bloodstars, a Covenant special operations group hunting the Spartans in the early years of the war. During a battle on a Covenant space station, Castor and Orsun are delayed in joining their comrades against the Master Chief, Kelly, Linda and Fred, and arrive only in time to find them all slaughtered by the Spartans. Rather than engage Blue Team, Castor and Orsun flee in Banshees and are picked up by the nearby Covenant fleet shortly after the station is destroyed by nuclear weapons that the Spartans had planted.
InHalo: Last Light, Castor is now the leader or Dokab of the Keepers of the One Freedom Covenant splinter faction. After learning of the presence of a Forerunner AI on the human colony of Gao, Castor launches an attack, aided by the Minister of Protection Arlo Casille who uses the opportunity to launch acoup d'étatand overthrow his own government. Castor is eventually mortally wounded during the battle, but the Huragok Roams Alone heals his injuries. Castor decides to let Roams Alone leave and vows revenge upon Casille for his betrayal.
In Halo: Retribution, Castor and the Keepers of the One Freedom have been conducting piracy against Casille as a form of revenge when they are framed by Dark Moon Enterprises for the murder of a UNSC admiral and the kidnapping of her family. Since the events on Gao, Castor has been communicating with a "Holy Oracle" – in actuality the Forerunner AI Intrepid Eye, who was recovered from Gao by the UNSC – who directed him to build a base on a former Forerunner outpost world while at the same time framing Castor and the Keepers for her own purposes. Intrepid Eye's machinations lead Blue Team and the Ferrets – a special investigations unit of ONI made up mainly of Spartan-IIIs – to Castor's base, where they recover the planted bodies of the admiral's family and destroy the base and ninety percent of the Keeper forces in the sector with nuclear weapons. Enraged, Castor chases Intrepid Eye's men to the Outer Colony of Meridian, where, much to Castor's grief, Orsun is killed by one of Intrepid Eye's men with a rocket launcher. Castor's forces, Blue Team and the Ferrets eventually foil Intrepid Eye's biological weapon plot, but Castor is left stranded on Meridian with a broken translator, searching for a way off of the moon.
In Halo: Shadows of Reach, Castor and the Keepers are now allied with the Banished along with two other factions of Brutes – the Legion of the Corpse-Moon and the Ravaged Tusks. Castor and his forces are enlisted by Banished War Chief Escharum to help find a Forerunner slipspace portal on the former human colony of Reach that can be used to connect to the Ark and send reinforcements to Atriox. By this point, Castor is aware that the "Oracle" is really Intrepid Eye, but he has been out of contact with her for a year and still believes in her guidance regardless. As a result, Castor considers his four human acolytes to be a gift from the Oracle, unaware that they are actually Veta Lopis and her Ferret team working deep undercover in the Keepers of the One Freedom. By following Blue Team when they arrive on the planet, Castor and his forces manage to locate and open the slipspace portal, allowing Atriox and several of his top warriors to return to the Milky Way in a Banished Lich. Castor and the Keepers then hijack the Lich and fly it to the Ark, intending to use the Ark to fire the Halo rings and begin the Great Journey, while Atriox remains behind to pursue a greater purpose of some kind with his forces in the Milky Way rather than returning to the Ark with reinforcements. Castor ignores Atriox's warnings about the thousands of Banished troops remaining on the Ark and convinces Inslaan 'Gadogai, a Banished Elite who had accompanied him throughout the book, to join the Keepers' mission.
InHalo: Divine Wind, Castor and his forces ally with a faction of surviving Covenant soldiers that were stranded on the Ark by the Covenant's defeat years before and Intrepid Eye who seeks to fire the Halo Array in order to destroy the Domain and weaken Cortana. It is revealed that in the years since he was stranded on Meridian, Castor has lost most of his forces due to an eradication campaign conducted by ONI thanks to the undercover efforts of the Ferrets, leaving only the Keepers accompanying Castor to the Ark. Castor is opposed by the Ferrets, forces from the UNSCSpirit of Fireand Banished forces under the command of Pavium and Voridus. Although the Keepers and the Covenant succeed in reaching a control facility where Halo can be fired, they suffer heavy losses in the process. Castor finally learns the truth about Halo and the Great Journey from an argument between Intrepid Eye and a Forerunner submonitor, shattering his faith. Castor flees with 'Gadogai as theSpirit of Firedestroys the facility and Intrepid Eye with orbitalEMProunds. Now the last survivors of the Keepers of the One Freedom, Castor and 'Gadogai set out to seek vengeance upon all of their enemies.
343 Guilty Spark (also called Guilty Spark or simply Spark), voiced by Tim Dadabo, is a robot who appears in the original Halo trilogy. Originally a human named Chakas who was digitized by the Forerunners at the expense of his biological form, Guilty Spark served as the caretaker of the Halo ring Installation 04, where he was a temporary ally, then enemy, of the Master Chief. He is severely damaged when he turns on the Master Chief and his allies in order to stop them from prematurely activating Installation 08 to eliminate the Flood; the premature activation destroys the fledgling installation and damages the main Ark installation in the process. The Halo novels reveal that he survived his apparent destruction.
Bungie originally wanted Guilty Spark to sound similar to the robotC-3PO.[60]Dadabo noted in an interview that reactions to his character have been hostile, finding Spark highly annoying.[6]He described Spark's character as a "bastard" who strings others along in order to accomplish his ends.[60]An annualHalloweenpumpkin carving contest named343 Guilt O'Lanternis organized byHalo.Bungie.Org; both the contest's title and logo use the character's design and name as inspiration.[61]Gaming siteGameDailylisted Guilty Spark as one of the top "evil masterminds" of video games, stating "IfHAL-9000had any distant relatives, [Guilty Spark would] be closest of kin."[62]
05-032 Mendicant Bias ("Beggar after Knowledge", as revealed in Halo: Cryptum) was the Contender-class Forerunner A.I. charged with organizing the Forerunner defense against the Flood. It later defected to the Gravemind, turning rampant and against the Forerunners, but was eventually defeated after the firing of the Halo Array and broken into sections, one of which was taken to the Ark, while another was left on the Forerunner keyship that would eventually be incorporated into the Covenant city of High Charity. It is this section of Mendicant Bias that informs the Covenant Hierarchs of humanity's descent from the Forerunners in Halo: Contact Harvest, prompting the Hierarchs to usurp the Covenant leadership and instigate the Human-Covenant War.
Mendicant Bias is first encountered in Halo 3 on the Ark, where it attempts to communicate with the Master Chief through terminals, claiming it seeks atonement for its defection to the Flood by helping the Spartan. It may have been destroyed when the Chief activated the incomplete Halo that the Ark was constructing;[63] however, as the Ark survived the firing, albeit badly damaged, it is likely that Mendicant Bias survived as well.
The Didact, born Shadow-of-Sundered-Star, (voiced byKeith Szarabajka) is a Forerunner military leader andHalo 4's main antagonist. The Didact developed a deep animosity towards humanity after fighting a war with them that cost him many soldiers, including his own children. The Didact disagrees with the plan to build the Halo Array to fight the Flood, instead proposing a system of "shield worlds" that is ultimately rejected. Going into exile in a kind of stasis within a device known as a Cryptum, he is later awoken by the Forerunner Bornstellar with the help of humans Chakas and Riser, all guided by the Librarian. The Didact imprints his consciousness on Bornstellar, who then becomes the Iso-Didact; when the Ur-Didact is presumed dead after being captured by the Master Builder, Bornstellar assumes the Didact's military role. Unknown to most, the Ur-Didact was actually abandoned in a Flood-infested system where he was captured and tortured by the Gravemind. Though he survived, the Ur-Didact's sanity was severely shaken by this encounter. Spurred to more drastic measures in an effort to stop the Flood, he forcibly composed innocent humans and turned them into mechanical soldiers. Horrified, the Librarian incapacitated the Didact and placed him in a Cryptum on his shield world Requiem, hoping that meditation and long exposure to the Domain would amend his motives and heal his damaged psyche. However, the activation of the Halos severed the Didact from the Domain, and he spent the next 100 millennia alone, with only his own rage and madness to keep him company.
During the events ofHalo 4, the Ur-Didact is accidentally released from his Cryptum by the Master Chief and Cortana. He immediately retakes control of the Prometheans and attempts to digitize the population of Earth, but is stopped by Cortana and Master Chief who is made immune to the Composer by an imprint of the Librarian on Requiem. The comic seriesEscalationreveals the Didact survived this encounter, but the Spartans of Blue Team stop his plans once again. He is apparently digitized by the Master Chief using several Composers, but the Master Chief considers him contained, not dead.
InHalo: Renegades, 343 Guilty Spark, formerly Bornstellar's human companion Chakas who once helped release the Ur-Didact from his Cryptum, learns from the Librarian of the Ur-Didact's release from his Cryptum on Requiem. From the Librarian's reaction to his questions, Spark realizes that the Ur-Didact's threat is currently "not worrisome," and that the Librarian still hopes for her husband to find peace. However, the Librarian sadly admits that she believes the Ur-Didact to be beyond redemption.
InHalo: Epitaph, it's revealed that the Didact had been killed when the Master Chief had used the Composers on him, and he was uploaded to the outer boundaries of the Domain. The Didact discovers that being digitized has burned away the Gravemind's corruption and restored his sanity. Now remorseful for his actions, the Didact reunites with a number of old friends and enemies who have been trapped outside of the Domain by the Warden Eternal. The Didact witnesses Cortana's corruption of the Warden and seeks to stop the rogue AI, even secretly helping to rescue Blue Team during the events ofHalo 5. The Didact eventually manages to destroy the Warden Eternal and allows everyone into the Domain, both human and Forerunner, including the victims of his attack on Earth. This strengthens the Domain, enabling it to evict the Created and the Didact has all access in the physical world sealed off in order to prevent anyone else from abusing its power. During a final confrontation with Cortana shortly before her destruction, the Didact helps the AI to realize what she has become. Afterwards, having finally found peace, the Didact is at last reunited with the Librarian in the halls of the Domain to spend the rest of eternity with his beloved wife.
InHalo: Empty Throne, the Didact briefly appears in the adjunct section, silently watching the sunrise with Forthencho.
The Librarian (voiced by Lori Tritel) is a highly ranked Forerunner who is married to the Didact. The Librarian spares humanity from extinction after their war with the Forerunners. She convinces the Forerunner council to use the Halos as preserves for fauna in addition to weapons and manipulates the humans Chakas and Riser as well as the young Forerunner Bornstellar into rescuing her husband from his Cryptum on Earth. She ultimately incapacitates and imprisons the Ur-Didact to stop his plans. While she is presumed to have died when the Halo Array was fired, she uploaded various copies of her personality to aid humanity in assuming the Forerunner's Mantle of Responsibility.
InHalo 4, the Master Chief encounters one such copy on Requiem where the Librarian explains some of the history of the Didact, the war between the humans and the Forerunners as well as the Composer. The Librarian reveals that the Master Chief is "the culmination of a thousand lifetimes of planning," the Librarian having guided humanity through their genetic code to reach the eventuality that became the Master Chief. However, the Librarian is unable to explain what she was planning for before they are interrupted by the Didact. At the Librarian's urging, the Master Chief permits her to accelerate his evolution in order to grant the Master Chief an immunity to the Composer, allowing the Master Chief to survive the Didact's later firing of the weapon. InSpartan Ops, taking place six months later, both the UNSC and Jul 'Mdama's Covenant splinter faction search Requiem for this copy of the Librarian. Dr. Catherine Halsey manages to access a shrine containing the Librarian who provides Halsey with the Janus Key and directs her to find the Absolute Record. The Librarian helps Fireteam Crimson track Halsey's signal in an effort to rescue Halsey from 'Mdama and Crimson helps the Librarian transmit herself to the Absolute Record. She later appears there inHalo: Escalation.
InHalo: Primordium, 343 Guilty Spark, once the human Chakas, claims to know where to find the Librarian, suggesting that she has survived. Rescued from an isolated planet by the crew of the salvage shipAce of SpadesinHalo: Renegades, Spark continues his search for the Librarian, which is ultimately revealed to be a search for another of her copies, not the Librarian herself. In a Forerunner structure on Earth beneathMount Kilimanjaro, Spark attempts to get the Librarian to help him bring back his friends from when he was human or to join them in the Domain, but the Librarian helps Spark see the folly of his plan. Instead, the Librarian helps Spark recognize the friends he has made amongst theAce of Spadescrew. Though the Librarian offers Spark the chance to join her in joining the rest of her copies at the Absolute Record, he decides to remain behind with his friends. The Librarian provides Spark with a coordinate key to "the safe place" and orders him to "find what's missing. Fix the path. Right what my kind has turned wrong." Before departing, the Librarian seemingly communicates with each member of the crew, telling Captain Rion Forge in particular to look after Spark who is more fragile and important than she could ever know and who might still have a role to play in events to come.
InHalo: Point of Light, Spark and Rion Forge search for the mythical Forerunner planet of Bastion using the coordinate key that the Librarian had provided to Spark. Bastion proves to be a Forerunner shield world taking on a form identical to the surface of the Earth that had acted as the Librarian's secret laboratory out of reach of the Forerunner Council. Spark helps another Forerunner Keeper-of-Tools whose mind is uploaded into a Monitor body to launch a ship calledEdenbefore Spark takes over duty as the caretaker of Bastion. Due to the threat of Cortana and her Guardians, Spark moves Bastion to keep it and the Librarian's research into a number of topics out of reach of enemies. The Librarian appears to Rion through visions several times, eventually revealing that she had discovered remnants of the Precursors – the extinct race that had created the Forerunners and the Flood – on her trip to another galaxy and secretly nurtured them on Bastion to ensure the rebirth of the race with a fresh start on a new world outside of the Milky Way, the mission of theEden.
InHalo: Epitaph, the Didact, his sanity having been restored from being digitized by the Master Chief, learns that the Librarian was uploaded directly to the Domain upon her death because of her contributions to Living Time unlike the other Forerunners who had died when Halo was fired and were trapped in the outer boundaries of the Domain. Now remorseful for his actions, the Didact works to stop the Created and reopen the Domain for everyone. At one point, the Librarian seemingly rescues the Didact after he's trapped in his worst memories by Cortana and he sees a brief vision of her across the rivers of Living Time, but the Didact sorrowfully acknowledges that the Librarian may never show herself to him and it's nothing that he doesn't deserve after all that the Didact has done. After finally finding peace, the Didact is at last reunited with his beloved wife in the halls of the Domain in a recreation of their home, at last able to spend eternity together.
A Forerunner archeon-class ancilla who appears inHalo: Last Light,Halo: Retribution, andHalo: Divine Wind. An extremely powerful AI that was originally the overseer of a Forerunner support base on Gao, Intrepid Eye undertakes a series of plots to prepare humanity for the Mantle of Responsibility, plots that usually have dire consequences for those involved.
InHalo: Last Light, having been awakened by the recent partial glassing of a nearby planet which had Forerunner ruins on it, Intrepid Eye begins killing humans who enter the caverns on Gao where her base is, becoming a serial killer that's investigated by Gao inspector Veta Lopis. Intrepid Eye is eventually identified as the true culprit, but she manages to evade capture on multiple occasions, destroying Fred-104's AI companion Wendell and taking over Fred's armor at one point. Intrepid Eye is finally subdued and captured when Lopis disables Fred's armor and her base is destroyed, but she is content to simply bide her time.
InHalo: Retribution, Intrepid Eye is imprisoned on the ONI space stationArgent Moonwhere she has managed to slip a number of remote aspects of herself, manifesting as lesser AIs loyal to Intrepid Eye's cause, out to do her work. Through her remote aspects, Intrepid Eye has created the rogue corporation Dark Moon Enterprises that had previously targeted Lopis' Ferret team and has manipulated Castor and the Keepers of the One Freedom while blackmailing Lieutenant Bartalan Craddog into doing her bidding. As part of her plans, Intrepid Eye uses her agents to cultivate a deadly disease into abiological weapon, but when they kill a UNSC admiral and abduct her family due to their immunity to the disease, the Ferrets and Blue Team are sent to hunt down the killers. The operation results in the destruction of a major Keepers' base, two of Intrepid Eye's remote aspects and the bioweapon samples. However, the plot is pinned on Craddog while Intrepid Eye evades detection as the true mastermind of the plot. Undeterred, Intrepid Eye simply manipulates the final report so that the bioweapon experiments are moved toArgent Moonand secretly given an unlimited black budget.
InHalo: Renegades, it's mentioned that ONI has learned from their mistakes with dealing with Intrepid Eye, suggesting that she has since been exposed. InHalo: Shadows of Reach, it's mentioned that Castor hasn't heard from her in over a year.
InHalo: Divine Wind, Intrepid Eye reveals herself after the Keepers of the One Freedom reach the Ark and make contact with a surviving faction of Covenant soldiers. It's revealed that Intrepid Eye's actions were eventually discovered by ONI who used millions of dumb AI hunter-killer teams that eventually overwhelmed even Intrepid Eye's prodigious capabilities, destroyed her network and completely isolated the ancilla aboardArgent Moon. The bioweapon experiments that Intrepid Eye had started eventually resulted in the disease getting out and killing the personnel aboardArgent Moon. Intrepid Eye was believed to have been destroyed along withArgent Moonwhen it was blown up by Blue Team inHalo 5: Guardians, but in actuality, Intrepid Eye had barely managed to escape by attaching herself to Blue Team's Prowler. Faced with the threat of Cortana and the Created, Intrepid Eye plots to fire the Halo rings, which will destroy the Domain and render Cortana vulnerable, and then reseed the galaxy using the Ark's resources and tailor humanity's genetic lineage to guarantee Mantle-worthiness, something that could take hundreds of millennia to accomplish. However, the Keeper-Covenant alliance comes under attack from Banished and UNSC forces, decimating their numbers. The truth about Halo is exposed to Castor in the process, causing him to desert Intrepid Eye. After the ancilla transfers herself into the only facility on the Ark currently capable of firing the Halo rings, theSpirit of Firefires upon it withEMProunds too powerful for even Intrepid Eye to survive. Trapped in the facility's systems, Intrepid Eye is finally destroyed by theSpirit of Fire.
The Gravemind (voiced byDee Bradley Baker) is one of the primary antagonists in the Halo series. The Gravemind is a large, sentient, cunning, manipulative creature ofFloodorigin, created by the parasite to serve as its central intelligence once a critical biomass has been achieved. It was introduced during the events ofHalo 2, where the creature saves both theMaster ChiefandArbiterfrom their deaths, bringing the two face to face in the bowels ofDelta Halo. Gravemind reveals to the Arbiter that the "sacred rings" are actually weapons of last resort; a fact the Master Chief confirms.[64]In order to stop Halo from being fired, Gravemind teleports the Master Chief and Arbiter to separate locations, but also uses them as a distraction; Gravemind infests the human shipIn Amber Clad, and uses it to invade the Covenant city ofHigh Charity.[65]CapturingCortana, Gravemind bringsHigh Charitytothe Arkin an effort to stop theHigh Prophet of Truthfrom activating the Halo network. Although the Master Chief destroysHigh Charity, Gravemind survives the blast and attempts to rebuild itself on the incompleteHalo.[66]When Halo is activated, Gravemind accepts his fate, but insists that the activation of the ring will only slow, not stop, the Flood.[67]InHalo Wars 2: Awakening the Nightmare, the Gravemind's warning is validated when the Banished inadvertently release a number of surviving Flood forms fromHigh Charity's wreckage. It is also mentioned in the game's menu that while the Gravemind's "most recent physical avatar" was destroyed by the Master Chief, it is "only a matter of time before it rises again". Though the Flood released upon the Ark form a Proto-Gravemind and come close to forming a new Gravemind, the Proto-Gravemind is killed by the Banished and the Flood are once again contained by the Banished and the Ark's Sentinels.
Designed to be a massive, horrifying combination of tentacles and rotting matter,[68]reception to the character was generally mixed. Jeremy Parish of1UP.comcomplained that the link between Gravemind and the Flood was never explicitly stated in eitherHalo 2orHalo 3and was hardly seen in the last game.[69]
The Sacred & the Digitaltheorizes that Gravemind is a directallusiontoSatan, atricksterwho uses false knowledge to seduce people. Noting that Gravemind's lair resembles theunderworld, it remarks that the journey through it results in a metaphorical rebirth for Master Chief and the Arbiter, bothJesus-like figures. While, like the devil, he is self-serving and seeks to prevent his destruction at the hands of the Halos, he differs from Satan as a speaker of truth, contrasting with the lies of the Prophets. The book also notes another inversion involving Gravemind, that ofSodom and Gomorrah. When Gravemind infects High Charity, he destroys the corrupted city of the faithful, not the sinful.[70]
Harbinger(voiced byDebra Wilson) is a secondary antagonist ofHalo Infinitewho sought to free her race, the Endless, also known as the Xalanyn, from imprisonment by utilizing Zeta Halo. The Forerunners passed judgement on her race for an unspecified crime, possibly even simply being too powerful, sealing them inside genetic repositories. After being awoken by the Banished, she forms a tenuous alliance with Escharum to defeat Master Chief, though she believes herself to be above him. Despite her hatred of the Forerunners, she opposes humanity simply as a means to an end, otherwise having no ill will towards them.
Following Escharum's defeat, Harbinger attempts to use the Silent Auditorium to locate and free the Endless. While she seemingly perishes at Master Chief's hands, she claims to have successfully accomplished this task, raising the attention of Atriox in an ending cutscene.[71]
The Halo franchise has produced numerous merchandising partnerships, and the characters of Halo have likewise been featured in a variety of products. The Master Chief, being the symbol of the franchise, has appeared on everything from soda to T-shirts and mugs. At one point, marketers for Halo 3 were planning on producing Cortana-themed lingerie.[72] There have also been several series of licensed action figures produced, with the Halo: Combat Evolved and Halo 2 collectibles being produced by Joyride Studios in several series.[73][74] For Halo 3, the responsibility of designing the action figures was given to McFarlane Toys;[75] a total of eight series have been announced, with a ninth series devoted to commemorating the tenth anniversary of the franchise by re-issuing a few of the earlier figurines along with pieces to construct a buildable plaque of the Legendary icon used in the game for the hardest skill level.[76] Kotobukiya produced high-end figurines.[77] Besides general figures like Covenant Elites and Spartans, figurines produced include the Master Chief, Cortana, the Arbiter, the Prophet of Regret, Tartarus, and Sergeant Johnson.[74]
|
https://en.wikipedia.org/wiki/Characters_of_Halo#High_Prophets
|
The following is a list of all of the Coptic Orthodox popes who have led the Coptic Orthodox Church in the office of Bishop of Alexandria, succeeding the Apostle Mark the Evangelist, who founded the Church in the 1st century and marked the beginning of Christianity in Africa.
The Coptic Orthodox Church is one of the Oriental Orthodox churches (not to be confused with the Byzantine Orthodox group of churches) and is presided over by the Pope and Patriarch of Alexandria, who is the body's spiritual leader. This position has been held since 2012 by Pope Tawadros II, the 118th Pope of Alexandria and Patriarch of all Africa on the Holy See of St. Mark.
The Oriental Orthodox believe that they are the "one, holy, catholic, and apostolic" Church of the ancient Christian creeds. To date, 92 of the Coptic Popes have been glorified, i.e., canonized as saints, in the Coptic Orthodox Church.
The title "pope" (in Greek,Papás) originally was a form of address meaning 'Father' used by several bishops. The first known record of this designation wasHeraclas, the 13thArchbishop of Alexandria(232–249). The Alexandrian usage of the honorific does not conflict with the usage in reference to thebishop of Rome.
The full ecclesiastical title is Papa Abba. The title signifies the devotion of all monastics, from Pentapolis in the west to Constantinople in the east, to his guidance. Within the denomination it is the most powerful designation, as all monks in the East voluntarily follow his spiritual authority, and it is said that its holder should be assumed to be a bearer of Christ.
For thePatriarchs of Alexandriaprior to the schism after theCouncil of Chalcedon, seeList of Patriarchs of Alexandria. For the patriarchs of theByzantine Orthodoxchurch after thesplitwith theOriental Orthodoxchurch, seeList of Greek Orthodox Patriarchs of Alexandria.
Not all of the dates given are certain. The dates below are according to theGregorian calendar. Some of the dates disagree with those given in Coptic publications such asThe English Katameros. In some cases, publication errors caused the difference and have been corrected. In other cases, calendar differences between theJulianand the Gregorian calendars have caused some confusion.
Dioscorus I served as Patriarch of Alexandria from 444 until he was deposed and exiled by the Council of Chalcedon in 451, but he was still recognized as the Coptic Pope until his death in 454.
The most frequently used papal name is John, with 19 popes taking this name. There have also been 25 papal names that have only been used once. The number of all popes to the present is 118.
|
https://en.wikipedia.org/wiki/List_of_Coptic_Orthodox_Popes_of_Alexandria
|
ThePeter principleis a concept inmanagementdeveloped byLaurence J. Peterwhich observes that people in ahierarchytend to rise to "a level of respective incompetence": employees are promoted based on their success in previous jobs until they reach a level at which they are no longercompetent, as skills in one job do not necessarily translate to another.[1][2]
The concept was explained in the 1969 bookThe Peter Principle(William Morrow and Company) byLaurence PeterandRaymond Hull.[3]Hull wrote the text, which was based on Peter's research. Peter and Hull intended the book to besatire,[4]but it became popular as it was seen to make a serious point about the shortcomings of how people are promoted within hierarchical organizations. The Peter principle has since been the subject of much commentary and research.
The Peter principle states that a person who is competent at their job will earn a promotion to a position that requires different skills. If the promoted person lacks the skills required for the new role, they will be incompetent at the new level, and will not be promoted again.[2]If the person is competent in the new role, they will be promoted again and will continue to be promoted until reaching a level at which they are incompetent. Being incompetent, the individual will not qualify for promotion again, and so will remain stuck at thisfinal placementorPeter's plateau.
This outcome is inevitable, given enough time and enough positions in the hierarchy to which competent employees may be promoted. The Peter principle is therefore expressed as: "In a hierarchy, every employee tends to rise to his level of incompetence." This leads to Peter's corollary: "In time, every post tends to be occupied by an employee who is incompetent to carry out its duties." Hull calls the study of how hierarchies workhierarchiology.[3]: 22, 24, 148
Laurence J. Peter's research led to the formulation of the Peter principle well before he published his findings.
Eventually, to elucidate his observations about hierarchies, Peter worked with Raymond Hull to develop a book, The Peter Principle, which was published by William Morrow and Company in 1969. As such, the principle is named for Peter because, although Hull actually wrote almost all of the book's text, it is a summary of Peter's research.[5]
In the first two chapters, Peter and Hull give various examples of the Peter principle in action. In each case, the higher position required skills that were not required at the level immediately below. For example, a competent school teacher may make a competent assistant principal, but then go on to be an incompetent principal. The teacher was competent at educating children, and as assistant principal, he was good at dealing with parents and other teachers, but as principal, he was poor at maintaining good relations with the school board and the superintendent.[3]: 27–9
In chapter 3, Peter and Hull discuss apparent exceptions to this principle and then debunk them. One of these illusory exceptions is when someone who is incompetent is still promoted anyway—they coin the phrase "percussive sublimation" for this phenomenon of being "kicked upstairs" (cf.Dilbert principle). However, it is only a pseudo-promotion: a move from one unproductive position to another. This improves staff morale, as other employees believe that they too can be promoted again.[3]: 32–3Another pseudo-promotion is the "lateral arabesque": when a person is moved out of the way and given a longer job title.[3]: 34–5
While incompetence is merely a barrier to further promotion, "super-incompetence" is grounds for dismissal, as is "super-competence". In both cases, "they tend to disrupt the hierarchy."[3]: 41 One specific example of a super-competent employee is a teacher of children with special needs: the teacher was so effective that, after a year, the children exceeded all expectations at reading and arithmetic, yet the teacher was still fired for having neglected to devote enough time to bead-stringing and finger-painting.[3]: 39
Chapters 4 and 5 deal with the two methods of achieving promotion: "push" and "pull". "Push" refers to the employee's own efforts, such as working hard and taking courses for self-improvement. This is usually not very effective due to the seniority factor: the next level up is often fully occupied, blocking the path to promotion.[3]: 52"Pull", on the other hand, is far more effective and refers to accelerated promotion brought about by the efforts of an employee's mentors or patrons.[3]: 48–51[6]
Chapter 6 explains why "good followers do not become good leaders."[3]: 60In chapter 7, Peter and Hull describe the effect of the Peter principle in politics and government. Chapter 8, titled "Hints and Foreshadowings", discusses the work of earlier writers on the subject of incompetence, such asSigmund Freud,Karl Marx, andAlexander Pope.
Chapter 9 explains that, once employees have reached their level of incompetence, they always lack insight into their situation. Peter and Hull go on to explain whyaptitude testsdo not work and are actually counter-productive.[3]: 84–6Finally, they describe "summit competence": when someone reaches the highest level in their organization and yet is still competent at that level. This is only because there were not enough ranks in the hierarchy, or because they did not have time to reach a level of incompetence. Such people often seek a level of incompetence in another hierarchy; this is known as "compulsive incompetence". For example,Socrateswas an outstanding teacher but a terrible defence attorney, andHitlerwas an excellent politician but an incompetent generalissimo.[3]: 88–9
Chapter 10 explains why attempts to assist an incompetent employee by promoting another employee to act as their assistant do not work: "Incompetence plus incompetence equals incompetence" (italics in original).[3]: 93
Chapters 11 and 12 describe the various medical andpsychological manifestationsofstressthat may come as result of someone reaching their level of incompetence, as well as othersymptomssuch as certaincharacteristic habits of speech or behavior.
Chapter 13 considers whether it is possible for an employee who has reached their level of incompetence to be happy and healthy once they get there: the answer is no if the person realizes their true situation, and yes if the person does not.[3]: 111–2
Various ways of avoiding promotion to the final level are described in chapter 14. Attempting to refuse an offered promotion is ill-advised and is only practicable if the employee is not married and has no one else to answer to. Generally, it is better to avoid being considered for promotion in the first place, by pretending to be incompetent while one is actually still employed at a level of competence. This is "Creative Incompetence," for which several examples of successful techniques are given. It works best if the chosen field of incompetence does not actually impair one's work.[3]: 125
The concluding chapter applies Peter's Principle to the entire human species at an evolutionary level and asks whether humanity can survive in the long run, or whether it will become extinct upon reaching its level of incompetence as technology advances.
Other commenters made observations similar to the Peter principle long before Peter's research.Gotthold Ephraim Lessing's 1763 playMinna von Barnhelmfeatures an army sergeant who shuns the opportunity to move up in the ranks, saying "I am a good sergeant; I might easily make a bad captain, and certainly an even worse general. One knows from experience." Similarly,Carl von Clausewitz(1780–1831) wrote that "there is nothing more common than to hear of men losing their energy on being raised to a higher position, to which they do not feel themselves equal."[7]Spanish philosopherJosé Ortega y Gasset(1883–1955) virtually enunciated the Peter principle in 1910, "All public employees should be demoted to their immediately lower level, as they have been promoted until turning incompetent."[7][8][9]
A number of scholars have engaged in research interpreting the Peter principle and its effects. In 2000,Edward Lazearexplored two possible explanations for the phenomenon. First is the idea that employees work harder to gain a promotion, and then slack off once it is achieved. The other is that it is a statistical process: workers who are promoted have passed a particular benchmark of productivity based on factors that cannot necessarily be replicated in their new role, leading to a Peter principle situation. Lazear concluded that the former explanation only occurs under particular compensation structures, whereas the latter always holds up.[10]
Alessandro Pluchino, Andrea Rapisarda, and Cesare Garofalo (2010) used an agent-based modelling approach to simulate the promotion of employees in a system where the Peter principle is assumed to be true. They found that the best way to improve efficiency in an enterprise is to promote people randomly, or to shortlist the best and the worst performers in a given group, from which the person to be promoted is then selected randomly.[11] For this work, they won the 2010 edition of the parody Ig Nobel Prize in management science.[12] Later work has shown that firms which follow the Peter principle may be disadvantaged, as they may be overtaken by competitors or may produce smaller revenues and profits;[13] it has also shown why success is most often a result of luck rather than talent, work which earned Pluchino and Rapisarda a second Ig Nobel Prize in 2022.[14]
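A minimal sketch of this kind of agent-based experiment is shown below. The level count, score distribution and efficiency measure are illustrative assumptions, not the parameters used by Pluchino et al.; the key ingredient is the "Peter hypothesis" that competence is re-drawn whenever someone changes role, so promoting the best performer only skims the most competent people off a level without improving the level above, and random promotion tends to come out ahead.

```python
import random

LEVELS = 4             # hierarchy depth (illustrative)
STAFF_PER_LEVEL = 50   # employees per level (illustrative)

def new_competence():
    """Competence is drawn afresh for every new role (the 'Peter hypothesis':
    skill in one job says nothing about skill in the next)."""
    return random.gauss(7.0, 2.0)

def simulate(strategy, rounds=5000):
    # org[level] holds the competence scores of everyone at that level.
    org = [[new_competence() for _ in range(STAFF_PER_LEVEL)] for _ in range(LEVELS)]
    for _ in range(rounds):
        level = random.randrange(LEVELS - 1)       # a vacancy opens one level up
        pool = org[level]
        if strategy == "best":                     # classic merit-based promotion
            idx = max(range(len(pool)), key=pool.__getitem__)
        else:                                      # "random" promotion
            idx = random.randrange(len(pool))
        pool[idx] = new_competence()               # backfill the vacated post
        # The promoted employee's competence in the new role is re-drawn.
        org[level + 1][random.randrange(STAFF_PER_LEVEL)] = new_competence()
    # Efficiency: average competence, weighted by how high the level sits.
    weighted = sum((lvl + 1) * sum(scores) for lvl, scores in enumerate(org))
    weight = sum((lvl + 1) * len(scores) for lvl, scores in enumerate(org))
    return weighted / weight

print("promote the best :", round(simulate("best"), 2))
print("promote at random:", round(simulate("random"), 2))
```

Under these toy assumptions the "promote the best" organization ends up with a slightly lower weighted average, because each merit-based promotion removes the strongest worker from a level and replaces them with an average new hire.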
In 2018, professors Alan Benson, Danielle Li, and Kelly Shue analyzed sales workers' performance and promotion practices at 214 American businesses to test the veracity of the Peter principle. They found that these companies tended to promote employees to a management position based on their performance in their previous position, rather than based on managerial potential. Consistent with the Peter principle, the researchers found that high performing sales employees were likelier to be promoted, and that they were likelier to perform poorly as managers, leading to considerable costs to the businesses.[15][16][2]
The Peter principle inspiredScott Adams, creator of the comic stripDilbert, to develop a similar concept, theDilbert principle. The Dilbert principle holds that incompetent employees are promoted to management positions to get them out of the workflow. The idea was explained by Adams in his 1996 business bookThe Dilbert Principle, and it has since been analyzed alongside the Peter principle. João Ricardo Faria wrote that the Dilbert principle is "a sub-optimal version of the Peter principle," and leads to even lower profitability than the Peter principle.[17]
Some authors refer to the phenomenon they have observed as thePaula principle: that the Peter principle applies mostly to male employees, while female employees are significantly less likely to be promoted than their male colleagues. Therefore women tend to be kept in positions that are below their abilities. They state that this discrimination against women affects all hierarchical levels and not just top positions. The name is a play on words with those of the apostles Peter and Paul.[18][19]
Companies and organizations have shaped their policies to contend with the Peter principle. Lazear stated that some companies expect that productivity will "regress to the mean" following promotion in their hiring and promotion practices.[10] Other companies have adopted "up or out" strategies, such as the Cravath System, in which employees who do not advance are periodically fired. The Cravath System was developed at the law firm Cravath, Swaine & Moore, which made a practice of hiring chiefly recent law graduates, promoting internally, and firing employees who do not perform at the required level.[20] Brian Christian and Tom Griffiths have suggested the additive increase/multiplicative decrease algorithm as a solution to the Peter principle that is less severe than firing employees who fail to advance. They propose a dynamic hierarchy in which employees are regularly either promoted or reassigned to a lower level, so that any worker who is promoted to their point of failure is soon moved to an area where they are productive, as sketched below.
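A toy version of such a dynamic hierarchy follows. The performance model, review count and threshold are invented for illustration and are not Christian and Griffiths' actual formulation; the point is only that each review either nudges an employee one level up or drops them sharply back, so they oscillate near the highest level they can handle instead of parking permanently above it.

```python
import random

def performance(level, aptitude):
    """Toy model: every extra level makes good performance a little harder."""
    return aptitude - 0.1 * level + random.gauss(0.0, 0.05)

def aimd_career(aptitude, reviews=40, threshold=0.5):
    """Additive increase / multiplicative decrease applied to an employee's level:
    climb one step after a good review, fall back to half the level after a bad one."""
    level, history = 0, []
    for _ in range(reviews):
        if performance(level, aptitude) >= threshold:
            level += 1            # additive increase
        else:
            level = level // 2    # multiplicative decrease
        history.append(level)
    return history

# An employee whose aptitude supports roughly levels 3-4 oscillates around that
# band instead of being stranded at their level of incompetence.
print(aimd_career(aptitude=0.85)[-10:])
```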
The Peter Principleis a British television sitcom broadcast by theBBCbetween 1995 and 2000, featuringJim Broadbentas an incompetent bank manager named Peter, in an apparent demonstration of the principle.
The Incompetence Opera[22]is a 16-minute mini-opera that premiered at the satiricalIg Nobel Prizeceremony in 2017,[23]described as "a musical encounter with the Peter principle and theDunning–Kruger effect".[24]
Freakonomics Radio is an American public radio program and podcast. In 2022, it produced an episode titled "Why Are There So Many Bad Bosses?", which explains the Peter principle and its practical implications. The episode aired in syndication on National Public Radio in the United States.[25]
|
https://en.wikipedia.org/wiki/Peter_Principle
|
Incomputer science,hierarchical protection domains,[1][2]often calledprotection rings, are mechanisms to protect data and functionality from faults (by improvingfault tolerance) and malicious behavior (by providingcomputer security).
Computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchicallevelsorlayersofprivilegewithin the architecture of acomputer system. This is generally hardware-enforced by someCPUarchitecturesthat provide differentCPU modesat the hardware ormicrocodelevel. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number). On most operating systems, Ring 0 is the level with the most privileges and interacts most directly with the physical hardware such as certain CPU functionality (e.g. the control registers) and I/O controllers.
Special mechanisms are provided to allow an outer ring to access an inner ring's resources in a predefined manner, as opposed to allowing arbitrary usage. Correctly gating access between rings can improve security by preventing programs from one ring or privilege level from misusing resources intended for programs in another. For example,spywarerunning as a user program in Ring 3 should be prevented from turning on a web camera without informing the user, since hardware access should be a Ring 1 function reserved fordevice drivers. Programs such as web browsers running in higher numbered rings must request access to the network, a resource restricted to a lower numbered ring.
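The gating idea described above, in which an outer ring reaches an inner ring only through predefined entry points rather than by arbitrary calls, can be modelled in a few lines of Python. This is purely a software illustration of the concept; the service names and ring assignments are hypothetical, and real CPUs enforce the equivalent checks in hardware.

```python
# Illustrative software model of ring gating; service names and ring
# assignments are hypothetical, mirroring the driver/network example above.
GATES = {
    "open_socket": 1,   # networking service exposed by the ring-1 driver layer
    "read_sector": 1,   # disk access exposed by the ring-1 driver layer
}

def call_gate(caller_ring, service):
    """Permit a cross-ring call only through a registered gate.

    Lower ring number = more privileged, as described in the text above.
    """
    if service not in GATES:
        raise PermissionError(f"{service}: not a registered entry point")
    target_ring = GATES[service]
    if caller_ring < target_ring:
        # The caller is already more privileged than the service it wants.
        return f"direct call into ring {target_ring} service {service}"
    return f"gated transition: ring {caller_ring} -> ring {target_ring} ({service})"

print(call_gate(3, "open_socket"))      # a ring-3 browser goes through the gate
# call_gate(3, "poke_hardware") would raise PermissionError: an outer ring
# cannot jump to an arbitrary inner-ring address, only to registered gates.
```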
X86S, a canceled Intel architecture published in 2024, has only ring 0 and ring 3. Ring 1 and 2 were to be removed under X86S since modern OSes never utilize them.[3][4]
Multiple rings of protection were among the most revolutionary concepts introduced by theMulticsoperating system, a highly secure predecessor of today'sUnixfamily of operating systems. TheGE 645mainframe computer did have some hardware access control, including the same two modes that the other GE-600 series machines had, and segment-level permissions in itsmemory management unit("Appending Unit"), but that was not sufficient to provide full support for rings in hardware, so Multics supported them by trapping ring transitions in software.[5]Its successor, theHoneywell 6180, implemented them in hardware, with support for eight rings.[6]Protection rings in Multics were separate from CPU modes; code in all rings other than ring 0, and some ring 0 code, ran in slave mode.[7]
However, most general-purpose systems use only two rings, even if the hardware they run on provides moreCPU modesthan that. For example, Windows 7 and Windows Server 2008 (and their predecessors) use only two rings, with ring 0 corresponding tokernel modeand ring 3 touser mode,[8]because earlier versions of Windows NT ran on processors that supported only two protection levels.[9]
Many modern CPU architectures (including the popularIntelx86architecture) include some form of ring protection, although theWindows NToperating system, like Unix, does not fully utilize this feature.OS/2does, to some extent, use three rings:[10]ring 0 for kernel code and device drivers, ring 2 for privileged code (user programs with I/O access permissions), and ring 3 for unprivileged code (nearly all user programs). UnderDOS, the kernel, drivers and applications typically run on ring 3 (this applies only when protected-mode drivers or DOS extenders are used; as a real-mode OS, the system otherwise runs with effectively no protection), whereas 386 memory managers such asEMM386run at ring 0. In addition to this,DR-DOS' EMM386 3.xx can optionally run some modules (such asDPMS) on ring 1 instead.OpenVMSuses four modes called (in order of decreasing privileges) Kernel, Executive, Supervisor and User.
A renewed interest in this design structure came with the proliferation of theXenVMMsoftware,ongoing discussiononmonolithicvs.micro-kernels(particularly inUsenetnewsgroups andWeb forums), Microsoft'sRing-1design structure as part of theirNGSCBinitiative, andhypervisorsbased onx86 virtualizationsuch asIntel VT-x(formerly Vanderpool).
The original Multics system had eight rings, but many modern systems have fewer. The hardware remains aware of the current ring of the executing instructionthreadat all times, with the help of a special machine register. In some systems, areas ofvirtual memoryare instead assigned ring numbers in hardware. One example is theData General Eclipse MV/8000, in which the top three bits of theprogram counter (PC)served as the ring register. Thus code executing with the virtual PC set to 0xE200000, for example, would automatically be in ring 7, and calling a subroutine in a different section of memory would automatically cause a ring transfer.
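On a machine like the Eclipse MV/8000, where the ring number lives in the top three bits of the program counter, determining the current ring is plain bit arithmetic. The 28-bit address width below is an assumption chosen so that the 0xE200000 example above maps to ring 7; the real machine's word layout may differ.

```python
# Ring number encoded in the top three bits of the virtual program counter,
# as on the Data General Eclipse MV/8000. The 28-bit PC width is an assumption
# made to match the 0xE200000 example in the text.
PC_BITS = 28
RING_SHIFT = PC_BITS - 3

def ring_of(pc: int) -> int:
    """Extract the ring number from the top three bits of the virtual PC."""
    return (pc >> RING_SHIFT) & 0b111

print(ring_of(0xE200000))  # 7 -- matches the example above
print(ring_of(0x0200000))  # 0 -- top three bits clear
```

Because the ring is encoded in the address itself, calling a subroutine whose address has different top bits is automatically a ring transfer, as described above.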
The hardware severely restricts the ways in which control can be passed from one ring to another, and also enforces restrictions on the types of memory access that can be performed across rings. Using x86 as an example, there is a special[clarification needed]gatestructure which is referenced by thecallinstruction that transfers control in a secure way[clarification needed]towards predefined entry points in lower-level (more trusted) rings; this functions as asupervisor callin many operating systems that use the ring architecture. The hardware restrictions are designed to limit opportunities for accidental or malicious breaches of security. In addition, the most privileged ring may be given special capabilities (such as real memory addressing that bypasses the virtual memory hardware).
ARMversion 7 architecture implements three privilege levels: application (PL0), operating system (PL1), and hypervisor (PL2). Unusually, level 0 (PL0) is the least-privileged level, while level 2 is the most-privileged level.[11]ARM version 8 implements four exception levels: application (EL0), operating system (EL1), hypervisor (EL2), and secure monitor / firmware (EL3), for AArch64[12]: D1-2454and AArch32.[12]: G1-6013
Ring protection can be combined withprocessor modes(master/kernel/privileged/supervisor modeversus slave/unprivileged/user mode) in some systems. Operating systems running on hardware supporting both may use both forms of protection or only one.
Effective use of ring architecture requires close cooperation between hardware and the operating system.[why?]Operating systems designed to work on multiple hardware platforms may make only limited use of rings if they are not present on every supported platform. Often the security model is simplified to "kernel" and "user" even if hardware provides finer granularity through rings.[13]
In computer terms,supervisor modeis a hardware-mediated flag that can be changed by code running in system-level software. System-level tasks or threads may[a]have this flag set while they are running, whereas user-level applications will not. This flag determines whether it would be possible to execute machine code operations such as modifying registers for various descriptor tables, or performing operations such as disabling interrupts. The idea of having two different modes to operate in comes from "with more power comes more responsibility" – a program in supervisor mode is trusted never to fail, since a failure may cause the whole computer system to crash.
Supervisor mode is "an execution mode on some processors which enables execution of all instructions, including privileged instructions. It may also give access to a different address space, to memory management hardware and to other peripherals. This is the mode in which the operating system usually runs."[14]
In amonolithic kernel, the operating system runs in supervisor mode and the applications run in user mode. Other types ofoperating systems, like those with anexokernelormicrokernel, do not necessarily share this behavior.
Some examples from the PC world:
Most processors have at least two different modes. Thex86processors have four different modes, divided into four different rings. Programs that run in Ring 0 can doanythingwith the system, while code that runs in Ring 3 should be able to fail at any time without impact on the rest of the computer system. Ring 1 and Ring 2 are rarely used, but could be configured with different levels of access.
In most existing systems, switching from user mode to kernel mode has an associated high cost in performance. It has been measured, on the basic requestgetpid, to cost 1,000–1,500 cycles on most machines. Of these, only around 100 are for the actual switch (70 from user to kernel space, and 40 back); the rest is "kernel overhead".[15][16]In theL3 microkernel, minimization of this overhead reduced the overall cost to around 150 cycles.[15]
Maurice Wilkeswrote:[17]
... it eventually became clear that the hierarchical protection that rings provided did not closely match the requirements of the system programmer and gave little or no improvement on the simple system of having two modes only. Rings of protection lent themselves to efficient implementation in hardware, but there was little else to be said for them. [...] The attractiveness of fine-grained protection remained, even after it was seen that rings of protection did not provide the answer... This again proved a blind alley...
To gain performance and determinism, some systems place functions that would likely be viewed as application logic, rather than as device drivers, in kernel mode; security applications (access control,firewalls, etc.) and operating system monitors are cited as examples. At least one embedded database management system,eXtremeDB Kernel Mode, has been developed specifically for kernel mode deployment, to provide a local database for kernel-based application functions, and to eliminate thecontext switchesthat would otherwise occur when kernel functions interact with a database system running in user mode.[18]
Functions are also sometimes moved across rings in the other direction. The Linux kernel, for instance, injects into processes avDSOsection which contains functions that would normally require a system call, i.e. a ring transition. Instead of doing a syscall these functions use static data provided by the kernel. This avoids the need for a ring transition and so is more lightweight than a syscall. The function gettimeofday can be provided this way.
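The effect of avoiding the ring transition can be glimpsed even from Python, although interpreter overhead blurs the numbers considerably. On Linux with glibc, clock_gettime(CLOCK_MONOTONIC) is normally answered from the vDSO entirely in user mode, while getpid() performs a real system call; the comparison below is only a rough, Linux-specific illustration, not a precise microbenchmark.

```python
import os
import time
import timeit

N = 1_000_000

# os.getpid() goes through a full system call (a ring transition) on modern
# glibc, whereas time.clock_gettime(CLOCK_MONOTONIC) is usually served from
# the vDSO without leaving user mode. Absolute figures include Python
# overhead; only the relative difference is of interest.
syscall_ns = timeit.timeit(os.getpid, number=N) / N * 1e9
vdso_ns = timeit.timeit(lambda: time.clock_gettime(time.CLOCK_MONOTONIC), number=N) / N * 1e9

print(f"getpid (syscall):     {syscall_ns:.0f} ns/call")
print(f"clock_gettime (vDSO): {vdso_ns:.0f} ns/call")
```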
Recent CPUs from Intel and AMD offerx86 virtualizationinstructions for ahypervisorto control Ring 0 hardware access. Although they are mutually incompatible, bothIntel VT-x(codenamed "Vanderpool") andAMD-V(codenamed "Pacifica") allow a guest operating system to run Ring 0 operations natively without affecting other guests or the host OS.
Beforehardware-assisted virtualization, guest operating systems ran under ring 1. Any operation requiring the higher privilege of ring 0 triggers a trap that is then handled in software; this is called "Trap and Emulate".
To assist virtualization and reduce this overhead, VT-x and AMD-V allow the guest to run under Ring 0. VT-x introduces VMX Root/Non-root Operation: the hypervisor runs in VMX Root Operation mode, possessing the highest privilege, while the guest OS runs in VMX Non-Root Operation mode, which allows it to operate at ring 0 without having actual hardware privileges. VMX non-root operation and VMX transitions are controlled by a data structure called the virtual-machine control structure (VMCS).[19]These hardware extensions allow classical "Trap and Emulate" virtualization to be performed on the x86 architecture, now with hardware support.
Aprivilege levelin thex86instruction setcontrols the access of the program currently running on the processor to resources such as memory regions, I/O ports, and special instructions. There are four privilege levels, ranging from 0 (most privileged) to 3 (least privileged). Most modern operating systems use level 0 for the kernel/executive and level 3 for application programs. Any resource available to level n is also available to levels 0 to n, so the privilege levels are rings. When a less privileged program tries to access a more privileged resource, ageneral protection faultexception is reported to the OS.
It is not necessary to use all four privilege levels. Currentoperating systemswith wide market share, includingMicrosoft Windows,macOS,Linux,iOSandAndroid, mostly use apagingmechanism with only one bit to specify the privilege level as either Supervisor or User (U/S Bit).Windows NTuses this two-level system.[20]Real-mode programs on the 8086 execute at level 0 (the highest privilege level), whereas virtual 8086 mode executes all programs at level 3.[21]
Potential future uses for the multiple privilege levels supported by the x86 ISA family includecontainerizationandvirtual machines. A host operating system kernel could use instructions with full privilege access (kernel mode), whereas applications running on the guest OS in a virtual machine or container could use the lowest level of privileges in user mode. The virtual machine and guest OS kernel could themselves use an intermediate level of instruction privilege to invoke andvirtualizekernel-mode operations such assystem callsfrom the point of view of the guest operating system.[22]
TheIOPL(I/O Privilege level) flag is a flag found on all IA-32 compatiblex86 CPUs. It occupies bits 12 and 13 in theFLAGS register. Inprotected modeandlong mode, it shows the I/O privilege level of the current program or task. The Current Privilege Level (CPL) (CPL0, CPL1, CPL2, CPL3) of the task or program must be less than or equal to the IOPL in order for the task or program to accessI/O ports.
The IOPL can be changed usingPOPF(D)andIRET(D)only when the current privilege level is Ring 0.
Besides IOPL, theI/O Port Permissionsin the TSS also take part in determining the ability of a task to access an I/O port.
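Since the IOPL occupies bits 12 and 13 of the FLAGS register, the rule that the CPL must be less than or equal to the IOPL amounts to a short bit manipulation. The EFLAGS values below are made-up examples; real code would read the register itself, and the TSS I/O permission bitmap mentioned above is not modeled here.

```python
IOPL_SHIFT = 12
IOPL_MASK = 0b11 << IOPL_SHIFT      # bits 12-13 of (E)FLAGS hold the IOPL

def iopl(eflags: int) -> int:
    """Extract the I/O privilege level from a FLAGS/EFLAGS value."""
    return (eflags & IOPL_MASK) >> IOPL_SHIFT

def may_access_io_ports(cpl: int, eflags: int) -> bool:
    """CPL must be numerically <= IOPL (the TSS I/O bitmap is ignored)."""
    return cpl <= iopl(eflags)

print(iopl(0x3202))                    # 3 -- hypothetical EFLAGS with IOPL = 3
print(may_access_io_ports(3, 0x3202))  # True: ring 3 code may use I/O ports
print(may_access_io_ports(3, 0x0202))  # False: IOPL = 0, only ring 0 may
```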
In x86 systems, the x86 hardware virtualization (VT-xandSVM) is referred to as "ring −1", theSystem Management Modeis referred to as "ring −2", and theIntel Management EngineandAMD Platform Security Processorare sometimes referred to as "ring −3".[23]
Many CPU hardware architectures provide far more flexibility than is exploited by theoperating systemsthat they normally run. Proper use of complex CPU modes requires very close cooperation between the operating system and the CPU, and thus tends to tie the OS to the CPU architecture. When the OS and the CPU are specifically designed for each other, this is not a problem (although some hardware features may still be left unexploited), but when the OS is designed to be compatible with multiple, different CPU architectures, a large part of the CPU mode features may be ignored by the OS. For example, the reason Windows uses only two levels (ring 0 and ring 3) is that some hardware architectures that were supported in the past (such asPowerPCorMIPS) implemented only two privilege levels.[8]
Multicswas an operating system designed specifically for a special CPU architecture (which in turn was designed specifically for Multics), and it took full advantage of the CPU modes available to it. However, it was an exception to the rule. Today, this high degree of interoperation between the OS and the hardware is not often cost-effective, despite the potential advantages for security and stability.
Ultimately, the purpose of distinct operating modes for the CPU is to provide hardware protection against accidental or deliberate corruption of the system environment (and corresponding breaches of system security) by software. Only "trusted" portions of system software are allowed to execute in the unrestricted environment of kernel mode, and then, in paradigmatic designs, only when absolutely necessary. All other software executes in one or more user modes. If a processor generates a fault or exception condition in a user mode, in most cases system stability is unaffected; if a processor generates a fault or exception condition in kernel mode, most operating systems will halt the system with an unrecoverable error. When a hierarchy of modes exists (ring-based security), faults and exceptions at one privilege level may destabilize only the higher-numbered privilege levels. Thus, a fault in Ring 0 (the kernel mode with the highest privilege) will crash the entire system, but a fault in Ring 2 will only affect Rings 3 and beyond and Ring 2 itself, at most.
Transitions between modes are at the discretion of the executingthreadwhen the transition is from a level of high privilege to one of low privilege (as from kernel to user modes), but transitions from lower to higher levels of privilege can take place only through secure, hardware-controlled "gates" that are traversed by executing special instructions or when external interrupts are received.
Microkerneloperating systems attempt to minimize the amount of code running in privileged mode, for purposes ofsecurityandelegance, but at some cost in performance.
|
https://en.wikipedia.org/wiki/Ring_(computer_security)
|
Social dominance theory(SDT) is asocial psychologicaltheory ofintergroup relationsthat examines thecaste-like features[1]of group-basedsocial hierarchies, and how these hierarchies remain stable and perpetuate themselves.[2]According to the theory, group-based inequalities are maintained through three primary mechanisms: institutionaldiscrimination, aggregated individual discrimination, and behavioral asymmetry. The theory proposes that widely shared cultural ideologies (“legitimizing myths”) provide the moral and intellectual justification for these intergroup behaviors[3]by serving to make privilege normal.[4]For data collection and validation of predictions, thesocial dominance orientation(SDO) scale was composed to measure acceptance of and desire for group-based social hierarchy,[5]which was assessed through two factors: support for group-based dominance and generalized opposition to equality, regardless of the ingroup's position in the power structure.[6]
The theory was initially proposed in 1992 by social psychology researchersJim Sidanius, Erik Devereux, andFelicia Pratto.[7]It observes that human social groups consist of distinctly different group-based social hierarchies in societies that are capable of producing economic surpluses. These hierarchies have a trimorphic (three-form) structure, a description which was simplified from the four-partbiosocialstructure identified byvan den Berghe(1978).[8]The hierarchies are based on: age (i.e., adults have more power and higher status than children), gender (i.e., men have more power and higher status than women), and arbitrary-set, which are group-based hierarchies that are culturally defined and do not necessarily exist in all societies. Such arbitrariness can select on ethnicity (e.g., in theUS,Bosnia,Asia,Rwanda), class, caste, religion (SunniversusShia Islam), nationality, or any othersocially constructedcategory.[9][10][11]Social hierarchy is not seen as a uniquely human feature: SDT argues there is substantial evidence that it, including the theorized trimorphic structure, is shared amongapesand otherprimates.[12][13]
Social dominance theory (SDT) argues that all human societies form group-based hierarchies. A social hierarchy is where some individuals receive greater prestige, power or wealth than others. A group-based hierarchy is distinct from an individual-based hierarchy in that the former is based on a socially constructed group such as race, ethnicity, religion, social class and freedoms, linguistic group, etc. while the latter is based on inherited, athletic or leadership ability, high intelligence, artistic abilities, etc.[14]
A primary assumption in social dominance theory (SDT) is thatracism,sexism,nationalism, andclassismare all manifestations of the same human disposition to form group-based social hierarchies.[15]The social tiers described by multiple intersectionaltheories of stratificationbecome organized into hierarchies due to forces that SDT believes are best explained byevolutionary psychologyas conferring high survival value.[16]Human social hierarchies are seen to consist of ahegemonic groupat the top and negative reference groups at the bottom.[17]More powerful social roles are increasingly likely to be occupied by a hegemonic group member (for example, an older white male). Males are more dominant than females, and they possess more political power and occupy higher status positions, illustrating the iron law ofandrocracy.[18]As a role gets more powerful,Putnam’s law of increasing disproportion[19]becomes applicable, and the probability that the role is occupied by a hegemonic group member increases.[20][21]
SDT adds new theoretical elements attempting a comprehensive synthesis of explanations of the three mechanisms of group hierarchy oppression[16]that are regulated by legitimizing myths:[3][22]
Although the nature of these hierarchical differences and inequality differs across cultures and societies, significant commonalities have been verified empirically using the social dominance orientation (SDO) scale. In multiple studies across countries, the SDO scale has been shown to correlate robustly with a variety of group prejudices (includingsexism,sexual orientation prejudice, racism, nationalism) and with hierarchy-enhancing policies.[24]
SDT believes that decisions and behaviors of individuals and groups can be better understood by examining the “myths” that guide and motivate them. Legitimizing myths are consensually held values, attitudes, beliefs,stereotypes,conspiracy theories,[25]and cultural ideologies. Examples include theinalienable rights of man,divine right of kings, theprotestant work ethic, andnational myths.[22][26]In current society, such legitimizing myths or narratives are communicated through platforms like social media, television shows, and films, and are investigated using a variety of methods includingcontent analysis,semiotics,discourse analysis, andpsychoanalysis.[27]The granularity of narrative extends from broad ideologies at the highest level to middle level personal myths (positive thinkingof oneself as a successful smart dominant, or submissive inferior[28]), reaching the lowest level of behavioral scripts orschemasfor particular dominant-submissive social situations.[29]Categories of myth include:
For regulation of the three mechanisms of group hierarchy oppression, there are two functional types of legitimizing myths: hierarchy-enhancing and hierarchy-attenuating myths. Hierarchy-enhancing ideologies (e.g., racism ormeritocracy) contribute to greater levels of group-based inequality.Felicia Prattopresented meritocracy as an example of a legitimizing myth, and how themyth of meritocracyproduces only an illusion offairness.[31]Hierarchy-attenuating ideologies such asprotected rights,universalism,egalitarianism,feminism, andmulticulturalismcontribute to greater levels of group-based equality.[32]People endorse these different forms of ideologies based in part on their psychological orientation to accept or reject unequal group relations as measured by the SDO scale. People who score higher on the SDO scale tend to endorse hierarchy-enhancing ideologies, and people who score lower tend to endorse hierarchy-attenuating ideologies.[33]Finally, SDT proposes that the relative counterbalance of hierarchy-enhancing and -attenuating social forces stabilizes group-based inequality.[34]
Authoritarian personalitytheory has an empirical scale known as theRWAmeasure, which strongly predicts a set of group-level sociopolitical behaviors, such as prejudice and ethnocentrism, substantially similar to those the SDO scale predicts, despite the scales being largely independent of each other.[35][36]Research byBob Altemeyerand others has shown the two scales have different patterns of correlation with characteristics at the individual level and other social phenomena. For example, high-SDO individuals are not particularly religious, but high-RWAs usually are; high-SDOs do not claim to be benevolent but high RWAs usually do.[37]Altemeyer theorizes that both are authoritarian personality measures, with SDO measuring dominant authoritarian personalities and RWA measuring the submissive type.[36]Other researchers believe that the debate between intergroup relation theories has moved past which theory can subsume all others or better explain all forms of discrimination. Instead, the debate has moved topluralistexplanation, where researchers need to determine which theory or combination of theories is appropriate under which conditions.[38]
The relationship between the two theories has been explored by Altemeyer and other researchers such as John Duckitt, who have exploited the greater coverage made possible by employing the RWA and SDO scales in tandem. Duckitt proposes a model in which RWA and SDO influence ingroup andoutgroupattitudes along two different dimensions: RWA measures threats to norms and values, so high RWA scores reliably predict negative views towards drug dealers and rock stars, while high SDO scores do not. The model theorizes that high-SDO individuals react to pecking-order competition with groups seen as socially subordinate (unemployment beneficiaries, housewives, the handicapped) and view them negatively, whereas RWA does not show any correlation.[39]Duckitt's research observed that RWA and SDO measures can become more correlated with age, and suggests the hypothesis that the perspectives were acquired independently during socialization and over time become more consistent as they interact with each other.[40]Unaffectionate socialization is hypothesized to cause the tough-minded attitudes of high-SDO individuals. Duckitt believes this competitive response dimension, the belief that the world operates on asurvival of the fittestscheme, is backed by multiple studies.[41]He predicts that the high correlation between views of the world as dangerous and as competitive emerges fromparenting stylesthat tend tocovaryalong the dimensions of punitiveness and lack of affection.
The model also suggests that these views mutually reinforce each other.[citation needed]Duckitt examined the complexities of the interaction between RWA, SDO, and a variety of specific ideological/prejudicial beliefs and behavior. For instance:
Duckitt also argued that this model may explain anti-authoritarian-libertarian and egalitarian-altruistic ideologies.
Other researchers view RWA and SDO as distinct. People high on the RWA scale are easily frightened and value security, but are not necessarily callous, cruel, and confident as those that score high on the SDO scale.[37][43]Altemeyer has conducted multiple studies, which suggest that the SDO measure is more predictive of racist orientation than the RWA measure,[44]and that while results from the two scales correlate closely for some countries (Belgium and Germany), his research and McFarland and Adelson's show they correlate very little for others (USA and Canada).[24][45]
Because patriarchal societies are dominated bymalesoverfemales, SDT predicts that everything else being equal, males tend to have a higher SDO score. This “invariance hypothesis” predicts that males will tend to function as hierarchy enforcers; that is, they are more likely to carry out acts ofdiscrimination, such as the systematic terror bypolice officers, and the extreme example ofdeath squadsandconcentration camps.[7][46]The hypothesis is supported by a demonstrated correlation between SDO scores and preference for occupations such as criminal prosecutors and police officers, as opposed to hierarchy-attenuating professions (social workers, human rights advocates, or health care workers).[47]SDT also predicts that males who carry out violent acts have been predisposed out of a conditioning called prepared learning.[48]
SDT was influenced by theelite theoriesofKarl Marx,Gaetano Mosca,Robert Michels, andVilfredo Pareto, all of whom argue that societies are ruled by a small elite who rationalize their power through some system of justifying narratives and ideologies.[49]Marx described the oppressive hierarchy of hegemonic groups dominating negative reference groups; in his examples thebourgeoisie(owning class) dominate theproletariat(working class) by controllingcapital(the means of production) and not paying workers enough. However, Marx thought that the working class would eventually comprehend the solution to this oppression and destroy the bourgeoisie in aproletarian revolution.Friedrich Engelsviewed ideology and social discourse as employed to keep dominants and subgroups in line, referring to this as "false consciousness", whose politicalrationalistcure results when masses can evaluate the facts of their situation. SDT believes that social constructions employing ideology and social narratives may be used as effective justifications regardless of whether they are epistemologically true or false, or whether they legitimize inequality or equality. From the Marxianeconomic deterministperspective, race, ethnic, and gender conflict are sociologicalepiphenomenaderivable from the primary economic class conflict. Unlike Marxian sociologists, SDT along with Mosca, Michels, and Pareto together rejectreductionismsolely to economic causes, and are skeptical of the hoped for class revolution. Pareto's analysis was that “victory” in the class struggle would only usher in a new set of socially dominant elites. Departing from elite theory's near exclusive focus on social structures manipulated by rational actors, SDT follows Pareto's new direction towards examining collective psychological forces, asserting that human behavior is not primarily driven by either reason or logic.[50]
John C. Turner and Katherine J. Reynolds from theAustralian National Universitypublished in theBritish Journal of Social Psychologya commentary on SDT, which outlined six fundamental criticisms based on internal inconsistencies: arguing against the evolutionary basis of the social dominance drive, questioning the origins of social conflict (hardwired versus social structure), questioning the meaning and role of the SDO construct, a falsification of behavioral asymmetry, the idea of an alternative to understanding attitudes to power including ideological asymmetry and collective self-interest, and a reductionism and philosophical idealism of SDT.[51]The commentary argues thatsocial identity theory(SIT) has better explanatory power than SDT, and made the case that SDT has been falsified by two studies: Schmitt, Branscombe, and Kappen (2003) and Wilson and Liu (2003).[52]
Wilson and Liu suggested intergroup attitudes follow social structure and cultural beliefs, theories, and ideologies developed to make sense of a group's place in the social structure and the nature of its relationships with other groups; from this view, SDO is a product rather than a cause of social life.[52]They questioned the invariance hypothesis, and cited their own test treating "strength of gender identification" as a moderator of the "gender‐social dominance orientation relationship", reporting that group identification was associated with increased dominance orientation in males but decreased dominance orientation in females. Pratto, Sidanius and Levin denied that any claim was made that SDO measures are independent of social identity context, noting that, methodologically, “it would obviously make no sense to compare the SDO levels of female members of death squads to those of male social workers, or, less dramatically, to compare the SDO levels of men identifying with female gender roles to those of women identifying with male gender roles”.[53]The hypothesized evolutionary predispositions of one gender towards SDO were not intended by the SDT authors to imply that nothing can be done about gender inequality or domination patterns; rather, they argue that the theory provides unique approaches for attenuating those predispositions and their social manifestations.[54]
Wilson and Liu (2003) conducted research examining the role of gender in relation to levels of social dominance orientation. The study ran two tests of whether the gender‐social dominance orientation relationship is moderated by the strength of gender group identification and found that "strength of gender identification was found to moderate the gender‐SDO relationship, such that increasing group identification was associated with increasing SDO scores for males, and decreasing SDO for females." This study therefore raised questions about whether gender, as a group membership, has a different status from other group memberships, possibly undermining the theoretical basis of SDT.[52]
|
https://en.wikipedia.org/wiki/Social_dominance_theory
|
Congestion games(CG) are a class of games ingame theory. They represent situations which commonly occur in roads,communication networks,oligopolymarkets andnatural habitats. There is a set of resources (e.g. roads or communication links); there are several players who need resources (e.g. drivers or network users); each player chooses a subset of these resources (e.g. a path in the network); the delay in each resource is determined by the number of players choosing a subset that contains this resource. The cost of each player is the sum of delays among all resources he chooses. Naturally, each player wants to minimize his own delay; however, each player's choices impose a negativeexternalityon the other players, which may lead to inefficient outcomes.
The research of congestion games was initiated by the American economistRobert W. Rosenthalin 1973.[1]He proved that every congestion game has a Nash equilibrium inpure strategies(akapure Nash equilibrium, PNE). During the proof, he in fact proved that every congestion game is anexact potential game. Later, Monderer and Shapley[2]proved a converse result: any game with an exact potential function is equivalent to some congestion game. Later research focused on questions such as:
Consider a traffic net where two players originate at pointOand need to get to pointT. Suppose that nodeOis connected to nodeTvia two paths:O-A-TandO-B-T, whereAis a little closer thanB(i.e.Ais more likely to be chosen by each player), as in the picture at the right.
The roads from both connection points toTget easily congested, meaning the more players pass through a point, the greater the delay of each player becomes, so having both players go through the same connection point causes extra delay. Formally, the delay in each ofATandBTwhenxplayers go there isx2{\displaystyle x^{2}}.
A good outcome in this game will be for the two players to "coordinate" and pass through different connection points. Can such an outcome be achieved?
The following matrix expresses the costs of the players in terms of delays depending on their choices:
The pureNash equilibriain this game are (OAT,OBT) and (OBT,OAT): any unilateral change by one of the players increases the cost of this player (note that the values in the table are costs, so players prefer them to be smaller). In this example, the Nash equilibrium isefficient- the players choose different lanes and the sum of costs is minimal.
In contrast, suppose the delay in each ofATandBTwhenxplayers go there is0.8x{\displaystyle 0.8x}. Then the cost matrix is:
Now, the only pure Nash equilibrium is(OAT,OAT){\displaystyle (OAT,OAT)}: any player switching to OBT increases his cost from 2.6 to 2.8. An equilibrium still exists, but it is not efficient: the sum of costs is 5.2, while the sum of cost in(OAT,OBT){\displaystyle (OAT,OBT)}and(OBT,OAT){\displaystyle (OBT,OAT)}is 4.6.
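Both variants of this example can be checked by brute-force enumeration. The fixed delays of 1 on the O-A leg and 2 on the O-B leg are an assumption of this sketch, inferred from the costs quoted above (2.6 and 2.8 for the individual players, 4.6 and 5.2 for the totals); only the second leg is congestion-dependent.

```python
from itertools import product

# Assumed fixed delays on the first leg (chosen to match the figures quoted
# in the text); the second leg's delay depends on its load.
FIRST_LEG = {"OAT": 1, "OBT": 2}

def costs(profile, second_leg_delay):
    """Cost of each player under a strategy profile (one path per player)."""
    load = {p: profile.count(p) for p in ("OAT", "OBT")}
    return tuple(FIRST_LEG[p] + second_leg_delay(load[p]) for p in profile)

def pure_nash_equilibria(second_leg_delay):
    equilibria = []
    for profile in product(("OAT", "OBT"), repeat=2):
        stable = True
        for i in (0, 1):
            for deviation in ("OAT", "OBT"):
                alt = list(profile)
                alt[i] = deviation
                if costs(tuple(alt), second_leg_delay)[i] < costs(profile, second_leg_delay)[i]:
                    stable = False
        if stable:
            equilibria.append(profile)
    return equilibria

print(pure_nash_equilibria(lambda x: x ** 2))   # [('OAT', 'OBT'), ('OBT', 'OAT')]
print(pure_nash_equilibria(lambda x: 0.8 * x))  # [('OAT', 'OAT')]
```

With delay x² the enumeration confirms the two anti-coordinated equilibria; with delay 0.8x it confirms that both players piling onto O-A-T is the unique, inefficient equilibrium.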
The basic definition of a CG has the following components: a finite set E of resources; a set of n players; for each player i, a set S_i of allowed strategies, where each strategy P_i in S_i is a subset of the resources; and for each resource e, a delay function d_e mapping the number of its users to a delay. Given a strategy for each player, the load x_e of resource e is the number of players whose strategy contains e, and the cost of player i is the sum of d_e(x_e) over all resources e in his chosen strategy.
Every CG has aNash equilibriuminpure strategies. This can be shown by constructing apotential functionthat assigns a value to each outcome.[1]Moreover, this construction will also show that iteratedbest responsefinds a Nash equilibrium. DefineΦ=∑e∈E∑k=1xede(k){\displaystyle \textstyle \Phi =\sum _{e\in E}\sum _{k=1}^{x_{e}}d_{e}(k)}. Note that this function isnotthe social welfare∑e∈Exede(xe){\displaystyle \textstyle \sum _{e\in E}x_{e}d_{e}(x_{e})}, but rather a discrete integral of sorts. The critical property of a potential function for a congestion game is that if one player switches strategy, the change in his delay is equal to the change in the potential function.
Consider the case when playeriswitches fromPi{\displaystyle P_{i}}toQi{\displaystyle Q_{i}}. Elements that are in both of the strategies
remain unaffected, elements that the player leaves (i.e.e∈Pi−Qi{\displaystyle e\in P_{i}-Q_{i}}) decrease the potential byde(xe){\displaystyle d_{e}(x_{e})}, and the elements the player joins (i.e.e∈Qi−Pi{\displaystyle e\in Q_{i}-P_{i}}) increase the potential byde(xe+1){\displaystyle d_{e}(x_{e}+1)}. This change in potential is precisely the change in delay for playeri, soΦ{\displaystyle \Phi }is in fact a potential function.
Now observe that any minimum ofΦ{\displaystyle \Phi }is a pure Nash equilibrium. Fixing all but one player, any improvement in strategy by that player corresponds to decreasingΦ{\displaystyle \Phi }, which cannot happen at a minimum. Now since there are a finite number of configurations and eachde{\displaystyle d_{e}}is monotone, there exists an equilibrium.
The existence of a potential function has an additional implication, called thefinite improvement property (FIP). If we start with any strategy-vector, pick a player arbitrarily, and let him change his strategy to a better strategy for him, and repeat, then the sequence of improvements must be finite (that is, the sequence will not cycle). This is because each such improvement strictly decreases the potential, which can take only finitely many values.
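Rosenthal's potential and the finite improvement property translate directly into a simple best-response loop: repeatedly let any player who can improve switch to a better strategy; the potential drops with every switch, so the loop terminates at a pure Nash equilibrium. The three-player instance below is invented purely to exercise the sketch.

```python
# Best-response dynamics for an unweighted congestion game, together with
# Rosenthal's potential. The instance at the bottom (three players, three
# resources, delay equal to the load) is a made-up illustration.

def loads(profile, resources):
    return {e: sum(e in s for s in profile) for e in resources}

def player_cost(i, profile, delays, resources):
    x = loads(profile, resources)
    return sum(delays[e](x[e]) for e in profile[i])

def potential(profile, delays, resources):
    x = loads(profile, resources)
    return sum(sum(delays[e](k) for k in range(1, x[e] + 1)) for e in resources)

def best_response_dynamics(strategies, delays, resources, profile):
    improved = True
    while improved:
        improved = False
        for i, options in enumerate(strategies):
            current = player_cost(i, profile, delays, resources)
            for s in options:
                candidate = profile[:i] + [s] + profile[i + 1:]
                if player_cost(i, candidate, delays, resources) < current:
                    profile, improved = candidate, True
                    break
    return profile

resources = ["a", "b", "c"]
delays = {e: (lambda x: x) for e in resources}                 # delay = load
strategies = [[{"a"}, {"b"}], [{"b"}, {"c"}], [{"a"}, {"c"}]]  # singleton strategies
profile = [{"a"}, {"b"}, {"a"}]                                # arbitrary starting point
eq = best_response_dynamics(strategies, delays, resources, profile)
print(eq, potential(eq, delays, resources))  # [{'a'}, {'b'}, {'c'}] 3
```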
Below we present various extensions and variations on the basic CG model.
Anonatomic(akacontinuous)CGis the limit of a standard CG withnplayers, asn→∞{\displaystyle n\rightarrow \infty }. There is a continuum of players, the players are considered "infinitesimally small", and each individual player has a negligible effect on the congestion. Nonatomic CGs were studied by Milchtaich,[3]Friedman[4]and Blonsky.[5][6]
Strategies are now collections of strategy profilesfP{\displaystyle f_{P}}. For a strategy setSi{\displaystyle S_{i}}of sizen, the collection of all valid profiles is acompact subsetof[0,ri]n{\displaystyle [0,r_{i}]^{n}}. We now define the potential function asΦ=∑e∈E∫0xede(z)dz{\displaystyle \textstyle \Phi =\sum _{e\in E}\int _{0}^{x_{e}}d_{e}(z)\,dz}, replacing the discrete integral with the standard one.
As a function of the strategy,Φ{\displaystyle \Phi }is continuous:de{\displaystyle d_{e}}is continuous by assumption, andxe{\displaystyle x_{e}}is a continuous function of the strategy. Then by theextreme value theorem,Φ{\displaystyle \Phi }attains its global minimum.
The final step is to show that a minimum ofΦ{\displaystyle \Phi }is indeed a Nash equilibrium. Assume for contradiction that there exists a collection offP{\displaystyle f_{P}}that minimizeΦ{\displaystyle \Phi }but are not a Nash equilibrium. Then for some typei, there exists some improvementQ∈Si{\displaystyle Q\in S_{i}}over the current choiceP. That is,∑e∈Qde(xe)<∑e∈Pde(xe){\displaystyle \textstyle \sum _{e\in Q}d_{e}(x_{e})<\sum _{e\in P}d_{e}(x_{e})}. The idea now is to take a small amountδ<fP{\displaystyle \delta <f_{P}}of players using strategyPand move them to strategyQ. Now for anyxe∈Q{\displaystyle x_{e}\in Q}, we have increased its load byδ{\displaystyle \delta }, so its term inΦ{\displaystyle \Phi }is now∫0xe+δde(z)dz{\displaystyle \textstyle \int _{0}^{x_{e}+\delta }d_{e}(z)dz}. Differentiating the integral, this change is approximatelyδ⋅de(xe){\displaystyle \delta \cdot d_{e}(x_{e})}, with errorδ2{\displaystyle \delta ^{2}}. The equivalent analysis of the change holds when we look at edges inP.
Therefore, the change in potential is approximatelyδ(∑e∈Qde(xe)−∑e∈Pde(xe)){\displaystyle \textstyle \delta (\sum _{e\in Q}d_{e}(x_{e})-\sum _{e\in P}d_{e}(x_{e}))}, which is less than zero. This is a contradiction, as thenΦ{\displaystyle \Phi }was not minimized. Therefore, a minimum ofΦ{\displaystyle \Phi }must be a Nash equilibrium.
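For the two-route network used in the earlier example, the nonatomic potential can be minimized by a simple grid search. The first-leg delays of 1 and 2 are again an assumption carried over from that example, and the total traffic mass of 2 (mirroring the two atomic players) is likewise an arbitrary choice for illustration.

```python
# Grid-search minimization of the nonatomic potential for the two-route
# example: assumed first-leg delays of 1 (O-A) and 2 (O-B), congestion delay
# z^2 on the second leg, total traffic mass 2; p is the flow on O-A-T.
TOTAL = 2.0

def potential(p):
    """Phi(p): sum over edges of the integral of the delay up to its load."""
    q = TOTAL - p
    return 1 * p + 2 * q + p ** 3 / 3 + q ** 3 / 3  # constant delays integrate linearly

p_star = min((i / 10000 * TOTAL for i in range(10001)), key=potential)
cost_oat = 1 + p_star ** 2
cost_obt = 2 + (TOTAL - p_star) ** 2
print(p_star, cost_oat, cost_obt)
# 1.25 2.5625 2.5625 -- both routes cost the same at the minimizer, which is
# exactly the equilibrium condition for nonatomic players.
```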
In asplittable CG,as in an atomic CG, there are finitely many players, each of whom has a certain load to transfer. As in nonatomic CGs, each player can split his load into fractional loads going through different paths, like a transportation company choosing a set of paths for mass transportation. In contrast to nonatomic CGs, each player has a non-negligible effect on the congestion.
Splittable CGs were first analyzed by Ariel Orda,Raphael Romand Nachum Shimkin in 1993, in the context of communication networks.[7][8]They show that, for a simple network with two nodes and multiple parallel links, the Nash equilibrium is unique under reasonable convexity conditions, and has some interesting monotonicity properties. For general network topologies, more complex conditions are required to guarantee the uniqueness of Nash equilibrium.
In aweightedCG, different players may have different effects on the congestion. For example, in a road network, atruckadds congestion much more than amotorcycle. In general, the weight of a player may depend on the resource (resource-specific weights): for every playeriand resourcee, there is weightwi,e{\displaystyle w_{i,e}}, and the load on the resourceeisxe=∑i:e∈Piwi,e{\displaystyle x_{e}=\sum _{i:e\in P_{i}}w_{i,e}}. An important special case is when the weight depends only on the player (resource-independent weights), that is, each player i has a weightwi{\displaystyle w_{i}}, andxe=∑i:e∈Piwi{\displaystyle x_{e}=\sum _{i:e\in P_{i}}w_{i}}.
Milchtaich[9]considered the special case of weighted CGs in which each strategy is a single resource ("singleton CG"), the weights areresource-independent, and all players have the same strategy set. The following is proved:
Milchtaich considered the special case of weighted CGs in which each strategy is a path in a given undirected graph ("network CG"). He proved that every finite game can be represented as a weighted network congestion game, with nondecreasing (but not necessarily nonnegative) cost-functions.[10]This implies that not every such game has a PNE. Concrete examples of weighted CGs without PNE are given by Libman and Orda,[11]as well as Goemans, Mirrokni and Vetta.[12]This raises the question of what conditions guarantee the existence of PNE.[13]
In particular, we say that a certain graphG guaranteesa certain property if every weighted network CG in which the underlying network isGhas that property. Milchtaich[14]characterized networks that guarantee the existence of PNE, as well as the finite-improvement property, with the additional condition that a player with a lower weight has weakly more allowed strategies (formally,wi<wj{\displaystyle w_{i}<w_{j}}implies|Si|≥|Sj|{\displaystyle |S_{i}|\geq |S_{j}|}). He proved that:
In the special case in which every player is allowed to use any strategy ("public edges"), there are more networks that guarantee the existence of PNE; a complete characterization of such networks is posed as an open problem.[14]
Milchtaich[15]analyzes the effect of network topology on theefficiencyof PNE:
Milchtaich[16]analyzes the effect of network topology on theuniquenessof the PNE costs:
Holzman and Law-Yone[17]also characterize the networks that guarantee that every atomic CG has astrong PNE, a unique PNE, or aPareto-efficientPNE.
Richman and Shimkin[18]characterize the networks that guarantee that everysplittableCG has a unique PNE.
We say that a classCof functionsguaranteesa certain property if every weighted CG in which all delay functions are elements ofChas that property.
There are many other papers about weighted congestion games.[23][24][25]
The basic CG model can be extended by allowing the delay function of each resource to depend on the player. So for each resourceeand playeri, there is a delay functiondi,e{\displaystyle d_{i,e}}. Given a strategyPi{\displaystyle P_{i}}, playeriexperiences delay∑e∈Pidi,e(xe){\displaystyle \textstyle \sum _{e\in P_{i}}d_{i,e}(x_{e})}.
Milchtaich[9]introduced and studiedCGs with player-specific costsin the following special case:
This special case of CG is also called acrowding game.[26][27]It represents a setting in which several people simultaneously choose a place to go to (e.g. a room, a settlement, a restaurant), and their payoff is determined both by the place and by the number of other players choosing the same place.
In a crowding game, given a strategyPi={e}{\displaystyle P_{i}=\{e\}}, playeriexperiences delaydi,e(xe){\displaystyle d_{i,e}(x_{e})}. If the player switches to a different strategyf, his delay would bedi,f(xf+1){\displaystyle d_{i,f}(x_{f}+1)}; hence, a strategy vector is a PNE iff for every player i,di,e(xe)≤di,f(xf+1){\displaystyle d_{i,e}(x_{e})\leq d_{i,f}(x_{f}+1)}for alle,f.
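The equilibrium condition d_i,e(x_e) ≤ d_i,f(x_f + 1) can be checked mechanically. The two-player, two-place instance below, and its delay functions, are invented for illustration.

```python
# Check whether an assignment of players to single resources is a pure Nash
# equilibrium of a crowding game: player i at place e is content iff
# d_i_e(x_e) <= d_i_f(x_f + 1) for every alternative place f.
# The delay tables are made up for illustration.

def is_pne(assignment, delays, places):
    load = {p: sum(1 for a in assignment if a == p) for p in places}
    return all(
        delays[i][e](load[e]) <= delays[i][f](load[f] + 1)
        for i, e in enumerate(assignment)
        for f in places if f != e
    )

places = ["cafe", "bar"]
delays = [
    {"cafe": lambda x: x,     "bar": lambda x: 3 * x},  # player 0 prefers the cafe
    {"cafe": lambda x: 3 * x, "bar": lambda x: x},      # player 1 prefers the bar
]
print(is_pne(["cafe", "bar"], delays, places))  # True
print(is_pne(["bar", "cafe"], delays, places))  # False: each player would rather switch
```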
In general, CGs with player-specific delays mightnotadmit apotentialfunction. For example, suppose there are three resources x,y,z and two players A and B with the following delay functions:
The following is a cyclic improvement path:(z,y)→(y,y)→(y,z)→(x,z)→(x,x)→(z,x)→(z,y){\displaystyle (z,y)\to (y,y)\to (y,z)\to (x,z)\to (x,x)\to (z,x)\to (z,y)}. This shows that the finite-improvement property does not hold, so the game cannot have a potential function (not even a generalized-ordinal-potential function). However:
When there are three or more players, even best-response paths might be cyclic. However, every CG still has a PNE.[9]: Thm.2The proof is constructive and shows an algorithm that finds a Nash equilibrium in at most(n+12){\displaystyle {n+1 \choose 2}}steps. Moreover, every CG isweakly acyclic: for any initial strategy-vector, at least one best-response path starting at this vector has a length of at mostr(n+12){\displaystyle r{n+1 \choose 2}}, which terminates at an equilibrium.[9]: Thm.3
Every crowding game issequentially solvable.[26]This means that, for every ordering of the players, thesequential gamein which each player in turn picks a strategy has asubgame-perfect equilibriumin which the players' actions are a PNE in the original simultaneous game. Every crowding game has at least onestrong PNE;[28]every strong PNE of a crowding game can be attained as a subgame-perfect equilibrium of a sequential version of the game.[26]
In general, a crowding game might have many different PNE. For example, suppose there arenplayers andnresources, and the negative effect of congestion on the payoff is much higher than the positive value of the resources. Then there are n! different PNEs: every one-to-one matching of players to resources is a PNE, as no player would move to a resource occupied by another player. However, if a crowding game is replicatedmtimes, then the set of PNEs converges to a single point asmgoes to infinity. Moreover, in a "large" (nonatomic) crowding game, there is generically a unique PNE. This PNE has an interesting graph-theoretic property. LetGbe abipartite graphwith players on one side and resources on the other side, where each player is adjacent to all the resources that his copies choose in the unique PNE. Then G contains no cycles.[27]
A special case of the player-specific delay functions is that the delay functions can be separated into a player-specific factor and a general factor. There are two sub-cases:
When only pure-strategies are considered, these two notions are equivalent, since the logarithm of a product is a sum. Moreover, when players may have resource-specific weights, the setting with resource-specific delay functions can be reduced to the setting with a universal delay function. Games with separable cost functions occur in load-balancing,[30]M/M/1 queueing,[31]andhabitat selection.[32]The following is known about weighted singleton CGs with separable costs:[33]
Every weightedsingletonCG with separable player-specific preferences is isomorphic to a weightednetworkCG with player-independent preference.[33][2]
Milchtaich considered the special case of CGs with player-specific costs, in which each strategy is a path in a given graph ("network CG"). He proved that every finite game can be represented as an (unweighted) network congestion game with player-specific costs, with nondecreasing (but not necessarily nonnegative) cost-functions.[10]A complete characterization of networks that guarantee the existence of PNE in such CGs is posed as an open problem.[14]
The proof of existence of PNE is constructive: it exhibits a finite algorithm (an improvement path) that always finds a PNE. This raises the question of how many steps are required to find such a PNE. Fabrikant,Papadimitriouand Talwar[38]proved:
Even-Dar, Kesselman and Mansour[30]analyze the number of steps required for convergence to equilibrium in a load-balancing setting.
Caragiannis, Fanelli, Gravin and Skopalik[39]present an algorithm that computes a constant-factor approximation PNE. In particular:
Their algorithm identifies a short sequence of best-response moves that leads to an approximate equilibrium. They also show that, for more general CGs, attaining any polynomial approximation of a PNE is PLS-complete.
Fotakis, Kontogiannis and Spirakis[19]present an algorithm that, in any weighted network CG with linear delay functions, finds a PNE inpseudo-polynomial time(polynomial in the number of playersnand the sum of players' weightsW). Their algorithm is agreedybest-responsealgorithm: players enter the game in descending order of their weight, and choose a best-response to existing players' strategies.
Panagopoulou and Spirakis[20]present empirical evidence that the algorithm of Fotakis, Kontogiannis and Spirakis in fact runs in time polynomial innand logW. They also propose an initial strategy-vector that dramatically speeds up this algorithm.
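The greedy idea behind the algorithm of Fotakis, Kontogiannis and Spirakis is easy to express: players enter in non-increasing order of weight, and each picks the strategy that is cheapest given the loads created by the players already placed. The toy instance below, two parallel links with linear delays, is an assumption for illustration; the published algorithm handles arbitrary networks with linear delay functions.

```python
# Greedy best-response sketch for a weighted congestion game with linear
# delays: players are placed in descending order of weight, each choosing the
# cheapest strategy given the loads so far. The instance data is made up.

def greedy_best_response(players, strategies, delay):
    """players: (name, weight) pairs; strategies: resource-sets;
    delay: resource -> (a, b), meaning delay(load) = a * load + b."""
    load = {e: 0.0 for e in delay}
    choice = {}
    for name, weight in sorted(players, key=lambda p: -p[1]):
        def cost(s, w=weight):
            return sum(delay[e][0] * (load[e] + w) + delay[e][1] for e in s)
        best = min(strategies, key=cost)
        for e in best:
            load[e] += weight
        choice[name] = best
    return choice, load

players = [("truck", 3.0), ("car", 1.0), ("bike", 0.5)]
strategies = [frozenset({"upper"}), frozenset({"lower"})]   # two parallel links
delay = {"upper": (1.0, 0.0), "lower": (1.0, 0.5)}
choice, load = greedy_best_response(players, strategies, delay)
print({n: sorted(s) for n, s in choice.items()})  # truck -> upper, car and bike -> lower
print(load)                                       # {'upper': 3.0, 'lower': 1.5}
```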
In general, a weighted network CG may not have a PNE. Milchtaich[14]proves that deciding whether a given weighted network CG has a PNE is NP-hard even in the following cases:
The proof is by reduction from the directed edge-disjoint paths problem.[40]
Caragiannis, Fanelli, Gravin and Skopalik[41]present an algorithm that computes a constant-factor approximation PNE in weighted CGs. In particular:
To prove their results, they show that, although weighted CGs may not have a potential function, every weighted CG can beapproximatedby a certain potential game. This lets them show that every weighted CG has a (d!)-approximate PNE. Their algorithm identifies a short sequence of best-response moves that leads to such an approximate PNE.
In summary, CGs can be classified according to various parameters: whether the players are atomic, nonatomic, or splittable; whether they are unweighted or weighted; whether the delay functions are shared or player-specific; and whether the strategies are arbitrary subsets of resources, singletons, or paths in a network.
|
https://en.wikipedia.org/wiki/Congestion_game
|
Anetwork partitionis a division of a computer network into relatively independentsubnets, either by design, to optimize them separately, or due to the failure of network devices. Distributed software must be designed to be partition-tolerant: it must continue to work correctly even after the network is partitioned.
For example, in a network with multiple subnets where nodes A and B are located in one subnet and nodes C and D are in another, a partition occurs if thenetwork switchdevice between the two subnets fails. In that case nodes A and B can no longer communicate with nodes C and D, but all nodes A-D work the same as before.
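The A/B versus C/D example can be phrased as a reachability question: once the switch fails, a breadth-first search from A no longer reaches C or D. A small sketch, with the topology invented to match the description above.

```python
from collections import deque

def reachable(start, links):
    """Breadth-first search over an undirected adjacency map."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in links.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Two subnets joined by a single switch, as in the example above.
healthy = {"A": {"B", "switch"}, "B": {"A", "switch"},
           "C": {"D", "switch"}, "D": {"C", "switch"},
           "switch": {"A", "B", "C", "D"}}

# The same network after the switch fails.
partitioned = {n: {m for m in nbrs if m != "switch"}
               for n, nbrs in healthy.items() if n != "switch"}

print(reachable("A", healthy) >= {"C", "D"})      # True: one connected network
print(reachable("A", partitioned) >= {"C", "D"})  # False: A and B are cut off from C and D
```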
TheCAP theoremis based on three trade-offs:consistency,availability, and partition tolerance. Partition tolerance, in this context, means the ability of a data processing system to continue processing data even if a network partition causes communication errors between subsystems.[1]
|
https://en.wikipedia.org/wiki/Network_partition
|
TheSeven Bridges of Königsbergis a historically notable problem in mathematics. Itsnegative resolutionbyLeonhard Euler, in 1736,[1]laid the foundations ofgraph theoryand prefigured the idea oftopology.[2]
The city ofKönigsberginPrussia(nowKaliningrad,Russia) was set on both sides of thePregel River, and included two large islands—KneiphofandLomse—which were connected to each other, and to the two mainland portions of the city, by seven bridges. The problem was to devise a walk through the city that would cross each of those bridges once and only once.
By way of specifying the logical task unambiguously, solutions involving either reaching an island or mainland bank by any route other than one of the bridges, or crossing any bridge without reaching its other end, are explicitly unacceptable.
Euler proved that the problem has no solution. The difficulty he faced was the development of a suitable technique of analysis, and of subsequent tests that established this assertion with mathematical rigor.
Euler first pointed out that the choice of route inside each land mass is irrelevant and that the only important feature of a route is the sequence of bridges crossed. This allowed him to reformulate the problem in abstract terms (laying the foundations ofgraph theory), eliminating all features except the list of land masses and the bridges connecting them. In modern terms, one replaces each land mass with an abstract "vertex" or node, and each bridge with an abstract connection, an "edge", which only serves to record which pair of vertices (land masses) is connected by that bridge. The resulting mathematical structure is agraph.
Since only the connection information is relevant, the shape of pictorial representations of a graph may be distorted in any way, without changing the graph itself. Only the number of edges (possibly zero) between each pair of nodes is significant. It does not, for instance, matter whether the edges drawn are straight or curved, or whether one node is to the left or right of another.
Next, Euler observed that (except at the endpoints of the walk), whenever one enters a vertex by a bridge, one leaves the vertex by a bridge. In other words, during any walk in the graph, the number of times one enters a non-terminal vertex equals the number of times one leaves it. Now, if every bridge has been traversed exactly once, it follows that, for each land mass (except for the ones chosen for the start and finish), the number of bridges touching that land mass must beeven(half of them, in the particular traversal, will be traversed "toward" the landmass; the other half, "away" from it). However, all four of the land masses in the original problem are touched by anoddnumber of bridges (one is touched by 5 bridges, and each of the other three is touched by 3). Since, at most, two land masses can serve as the endpoints of a walk, the proposition of a walk traversing each bridge once leads to a contradiction.
In modern language, Euler shows that the possibility of a walk through a graph, traversing each edge exactly once, depends on thedegreesof the nodes. The degree of a node is the number of edges touching it. Euler's argument shows that a necessary condition for the walk of the desired form is that the graph beconnectedand have exactly zero or two nodes of odd degree. This condition turns out also to be sufficient—a result stated by Euler and later proved byCarl Hierholzer. Such a walk is now called anEulerian trailorEuler walkin his honor. Further, if there are nodes of odd degree, then any Eulerian path will start at one of them and end at the other. Since the graph corresponding to historical Königsberg has four nodes of odd degree, it cannot have an Eulerian path.
An alternative form of the problem asks for a path that traverses all bridges and also has the same starting and ending point. Such a walk is called anEulerian circuitor anEuler tour. Such a circuit exists if, and only if, the graph is connected and all nodes have even degree. All Eulerian circuits are also Eulerian paths, but not all Eulerian paths are Eulerian circuits.
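Euler's criterion reduces the whole question to counting vertices of odd degree. The sketch below encodes the seven historical bridges between the four land masses (Kneiphof, Lomse, and the north and south banks), a labeling that reproduces the degrees 5, 3, 3, 3 quoted above; applying the same check to the modern configuration with degrees 2, 2, 3, 3 would report that an Eulerian path exists.

```python
from collections import Counter

def odd_degree_nodes(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return [n for n, d in deg.items() if d % 2 == 1]

def has_euler_walk(edges):
    """A connected multigraph has an Eulerian trail iff 0 or 2 vertices have odd degree."""
    return len(odd_degree_nodes(edges)) in (0, 2)

# The seven historical bridges: Kneiphof (K), Lomse (L), north bank (N), south bank (S).
koenigsberg = [("K", "N"), ("K", "N"), ("K", "S"), ("K", "S"),
               ("K", "L"), ("N", "L"), ("S", "L")]
print(odd_degree_nodes(koenigsberg))  # ['K', 'N', 'S', 'L'] -- all four are odd
print(has_euler_walk(koenigsberg))    # False: no walk crosses every bridge exactly once
```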
Euler's work was presented to the St. Petersburg Academy on 26 August 1735, and published asSolutio problematis ad geometriam situs pertinentis(The solution of a problem relating to the geometry of position) in the journalCommentarii academiae scientiarum Petropolitanaein 1741.[3]It is available in English translation inThe World of MathematicsbyJames R. Newman.
In thehistory of mathematics, Euler's solution of the Königsberg bridge problem is considered to be the first theorem ofgraph theoryand the first true proof innetwork theory,[4]a subject now generally regarded as a branch ofcombinatorics. Combinatorial problems of other types, such as theenumerationofpermutationsandcombinations, had been considered since antiquity.
Euler's recognition that the key information was the number of bridges and the list of their endpoints (rather than their exact positions) presaged the development oftopology. The difference between the actual layout and the graph schematic is a good example of the idea that topology is not concerned with the rigid shape of objects.
Hence, as Euler recognized, the "geometry of position" is not about "measurements and calculations" but about something more general. That called into question the traditionalAristotelianview that mathematics is the "science ofquantity". Though that view fits arithmetic and Euclidean geometry, it does not fit topology and the more abstract structural features studied in modern mathematics.[5]
Philosophers have noted that Euler's proof is not about an abstraction or a model of reality, but directly about the real arrangement of bridges. Hence the certainty of mathematical proof can apply directly to reality.[6]The proof is also explanatory, giving insight into why the result must be true.[7]
Two of the seven original bridges did not survive thebombing of Königsberg in World War II. Two others were later demolished and replaced by a highway. The three other bridges remain, although only two of them are from Euler's time (one was rebuilt in 1935).[8]These changes leave five bridges existing at the same sites that were involved in Euler's problem. In terms of graph theory, two of the nodes now have degree 2, and the other two have degree 3. Therefore, an Eulerian path is now possible, but it must begin on one island and end on the other.[9]
TheUniversity of CanterburyinChristchurchhas incorporated a model of the bridges into a grass area between the old Physical Sciences Library and the Erskine Building, housing the Departments of Mathematics, Statistics and Computer Science.[10]The rivers are replaced with short bushes and the central island sports a stonetōrō.Rochester Institute of Technologyhas incorporated the puzzle into the pavement in front of theGene Polisseni Center, an ice hockey arena that opened in 2014,[11]and theGeorgia Institute of Technologyalso installed a landscape art model of the seven bridges in 2018.[12]
A popular variant of the puzzle is theBristol Bridges Walk.[13]Like historical Königsberg,Bristoloccupies two river banks and two river islands.[14]However, the configuration of the 45 major bridges in Bristol is such that an Eulerian circuit exists.[15]This cycle has been popularized by a book[15]and news coverage[16][17]and has featured in different charity events.[18]
|
https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg
|
A method for pruning dense networks to highlight key links
Relationships among a set of elements are often represented as a square matrix with entries representing the relations between all pairs of the elements. Relations such as distances, dissimilarities, similarities, relatedness, correlations, co-occurrences, conditional probabilities, etc., can be represented by such matrices. Such data can also be represented as networks with weighted links between the elements. Such matrices and networks are extremely dense and are not easily apprehended without some form ofdata reductionor pruning.
Apathfinder networkresults from applying a pruning method that removes weaker links from a (usually dense) network according to the lengths of alternative paths (see below).[1][2][3]It is used as apsychometricscaling method based ongraph theoryand used in the study of expertise, education,[4]knowledge acquisition,mental models,[5]andknowledge engineering. It is also employed in generating communication networks,[6]software debugging,[7]visualizing scientificcitationpatterns,[8]information retrieval, and other forms ofdata visualization.[9]Pathfinder networks are potentially applicable to any problem addressed bynetwork theory.
Network pruning aims to highlight the more important links between elements represented in a network. It helps to simplify the collection of connections involved which is valuable in data visualization and in comprehending essential relations among the elements represented in the network.
Several psychometric scaling methods start from pairwise data and yield structures revealing the underlying organization of the data.Data clusteringandmultidimensional scalingare two such methods. Network scaling represents another method based ongraph theory. Pathfinder networks are derived from matrices of data for pairs of entities. Because the algorithm uses distances, similarity data are inverted to yield dissimilarities for the computations.
In the pathfinder network, the entities correspond to the nodes of the generated network, and the links in the network are determined by the patterns of proximities. For example, if the proximities are similarities, links will generally connect nodes of high similarity. When proximities are distances or dissimilarities, links will connect the shorter distances. The links in the network will be undirected if the proximities are symmetrical for every pair of entities. Symmetrical proximities mean that the order of the entities is not important, so the proximity ofiandjis the same as the proximity ofjandifor all pairsi,j. If the proximities are not symmetrical for every pair, the links will be directed.
The pathfinder algorithm uses two parameters.
Path distancedp{\displaystyle d_{p}}is computed as:dp=(∑i=1klir)1/r{\displaystyle d_{p}=(\sum _{i=1}^{k}l_{i}^{r})^{1/r}}, whereli{\displaystyle l_{i}}is the distance of theith{\displaystyle ith}link in the path and2≤k≤q{\displaystyle 2\leq k\leq q}. Forr=1{\displaystyle r=1},dp{\displaystyle d_{p}}is simply the sum of the distances of the links in the path. Forr=∞{\displaystyle r=\infty },dp{\displaystyle d_{p}}is the maximum of the distances of the links in the path becauselimr→∞dp=maxi=1kli{\displaystyle \lim _{r\rightarrow \infty }d_{p}=\max _{i=1}^{k}l_{i}}. A link is pruned if its distance is greater than the minimum distance of paths between the nodes connected by the link. Efficient methods for finding minimum distances include theFloyd–Warshall algorithm(forq=n−1{\displaystyle q=n-1}) andDijkstra's algorithm(for any value ofq{\displaystyle q}).
A network generated with particular values ofq{\displaystyle q}andr{\displaystyle r}is called aPFNet(q,r){\displaystyle PFNet(q,r)}. Both of the parameters have the effect of decreasing the number of links in the network as their values are increased. The network with the minimum number of links is obtained whenq=n−1{\displaystyle q=n-1}andr=∞{\displaystyle r=\infty }, i.e.,PFNet(n−1,∞){\displaystyle PFNet(n-1,\infty )}.
With ordinal-scale data (seelevel of measurement), ther{\displaystyle r}parameter should be∞{\displaystyle \infty }because the samePFNet{\displaystyle PFNet}would result from any positivemonotonic transformationof the proximity data. Other values ofr{\displaystyle r}require data measured on a ratio scale. Theq{\displaystyle q}parameter can be varied to yield the desired number of links in the network or to focus on more local relations with smaller values ofq{\displaystyle q}.
Essentially, pathfinder networks preserve the shortest possible paths given the data. Therefore, links are eliminated when they are not on shortest paths. ThePFNet(n−1,∞){\displaystyle PFNet(n-1,\infty )}will be theminimum spanning treefor the links defined by the proximity data if a unique minimum spanning tree exists. In general, thePFNet(n−1,∞){\displaystyle PFNet(n-1,\infty )}includes all of the links in any minimum spanning tree.
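For the common r = ∞, q = n − 1 case, the pruning rule can be sketched with a Floyd–Warshall-style pass that computes, for every pair of nodes, the smallest possible largest link over all connecting paths; a direct link survives only if no alternative path beats it. The function name and the toy dissimilarity matrix below are invented for illustration, and the fast published algorithms cited later use considerably more refined implementations:

```python
import math

def pfnet_max_q(weights):
    """Sketch of PFNet(n-1, infinity) pruning on a symmetric distance matrix.

    weights[i][j] is the dissimilarity between items i and j (math.inf if
    there is no direct link).  With r = infinity a path's length is its
    largest link, so a Floyd-Warshall pass over this "minimax" metric finds,
    for every pair, the smallest achievable largest link.  A direct link is
    kept only if no alternative path beats it under that metric.
    """
    n = len(weights)
    d = [row[:] for row in weights]            # minimax distances, start as direct links
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via_k = max(d[i][k], d[k][j])  # path length for r = infinity
                if via_k < d[i][j]:
                    d[i][j] = via_k
    # keep edge (i, j) only if it is as short as the best alternative path
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if weights[i][j] < math.inf and weights[i][j] <= d[i][j]]

# Toy dissimilarity matrix for four items (hypothetical values).
w = [[0, 1, 4, 6],
     [1, 0, 2, 5],
     [4, 2, 0, 3],
     [6, 5, 3, 0]]
print(pfnet_max_q(w))   # links (0,2), (0,3) and (1,3) are pruned
```

On this toy matrix the surviving links are exactly the minimum spanning tree, consistent with the relationship between PFNet(n−1, ∞) and minimum spanning trees noted above.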
Here is an example of an undirected pathfinder network derived from average ratings of a group of biology graduate students. The students rated the relatedness of all pairs of the terms shown, and the mean rating for each pair was computed. The solid blue links are thePFNet(n−1,∞){\displaystyle PFNet(n-1,\infty )}(labeled "both" in the figure). The dotted red links are added in thePFNet(2,∞){\displaystyle PFNet(2,\infty )}. For the added links, there are no 2-link paths shorter than the link distance, but there is at least one shorter path with more than two links in the data. A minimum spanning tree would have 24 links, so the 26 links inPFNet(n−1,∞){\displaystyle PFNet(n-1,\infty )}imply that there is more than one minimum spanning tree. There are two cycles present, so there are tied distances among the links in each cycle. Breaking a cycle would require removing one of its tied links.
Further information on pathfinder networks and several examples of the application of PFnets to a variety of problems can be found in the references.
Three papers describing fast implementations of pathfinder networks:
(The two variants by Quirin et al. are significantly faster. While the former can be applied withq=2{\displaystyle q=2}orq=n−1{\displaystyle q=n-1}and any value forr{\displaystyle r}, the latter can only be applied in cases whereq=n−1{\displaystyle q=n-1}andr=∞{\displaystyle r=\infty }.)
|
https://en.wikipedia.org/wiki/Pathfinder_network
|
Ahuman disease networkis a network of human disorders anddiseaseswith reference to their genetic origins or other features. More specifically, it is the map of human disease associations referring mostly to diseasegenes. For example, in a human disease network, two diseases are linked if they share at least one associated gene. A typical human disease network usually derives from bipartite networks which consist of both diseases and genes information. Additionally, some human disease networks use other features such assymptomsand proteins to associate diseases.
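As a minimal illustration of such a projection, the sketch below links two diseases whenever they share an associated gene; the disease and gene names are invented and not drawn from OMIM or any real dataset:

```python
from itertools import combinations

# Hypothetical disease -> associated-gene mapping (toy data only).
disease_genes = {
    "disease_A": {"GENE1", "GENE2"},
    "disease_B": {"GENE2", "GENE3"},
    "disease_C": {"GENE4"},
    "disease_D": {"GENE3", "GENE4"},
}

# Project the bipartite disease-gene structure onto diseases:
# link two diseases if they share at least one associated gene,
# weighting the link by the number of shared genes.
edges = {}
for d1, d2 in combinations(sorted(disease_genes), 2):
    shared = disease_genes[d1] & disease_genes[d2]
    if shared:
        edges[(d1, d2)] = len(shared)

print(edges)
# {('disease_A', 'disease_B'): 1, ('disease_B', 'disease_D'): 1,
#  ('disease_C', 'disease_D'): 1}
```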
In 2007, Goh et al. constructed a disease-genebipartite graphusing information from theOMIMdatabase and termed it the human disease network.[2]In 2009, Barrenas et al. derived a complex disease-gene network using GWAS (genome-wide association studies).[3]In the same year, Hidalgo et al. published a novel way of building human phenotypic disease networks in which diseases were connected according to their calculated distance.[4]In 2011, Cusick et al. summarized studies on genotype-phenotype associations in a cellular context.[5]In 2014, Zhou et al. built a symptom-based human disease network by mining a biomedical literature database.[6]
A large-scale human disease network shows a scale-free property. Thedegree distributionfollows apower law, suggesting that only a few diseases connect to a large number of diseases, whereas most diseases have few links to others.
Such a network also shows a clustering tendency by disease classes.[2][7]
In a symptom-based disease network, diseases are also clustered according to their categories. Moreover, diseases sharing the same symptom are more likely to share the same genes andprotein interactions.[6]
|
https://en.wikipedia.org/wiki/Human_disease_network
|
Abiological networkis a method of representing systems as complex sets of binary interactions or relations between various biological entities.[1]In general, networks or graphs are used to capture relationships between entities or objects.[1]A typicalgraphingrepresentation consists of a set ofnodesconnected byedges.
As early as 1736Leonhard Euleranalyzed a real-world issue known as theSeven Bridges of Königsberg, which established the foundation ofgraph theory. From the 1930s to the 1950s, the study ofrandom graphswas developed. During the mid-1990s, it was discovered that many different types of "real" networks have structural properties quite different from random networks.[2]In the late 2000s, scale-free and small-world networks began shaping the emergence of systems biology, network biology, and network medicine.[3]In 2014, graph theoretical methods were used by Frank Emmert-Streib to analyze biological networks.[4]
In the 1980s, researchers started viewingDNAorgenomesas the dynamic storage of a language system with precise computable finitestatesrepresented as afinite-state machine.[5]Recentcomplex systemsresearch has also suggested some far-reaching commonality in the organization of information in problems from biology,computer science, andphysics.
Protein-protein interaction networks(PINs) represent the physical relationship among proteins present in a cell, where proteins arenodes, and their interactions are undirectededges.[6]Due to their undirected nature, it is difficult to identify all the proteins involved in an interaction.Protein–protein interactions(PPIs) are essential to the cellular processes and also the most intensely analyzed networks in biology. PPIs could be discovered by various experimental techniques, among which theyeast two-hybrid systemis a commonly used technique for the study of binary interactions.[7]Recently, high-throughput studies using mass spectrometry have identified large sets of protein interactions.[8]
Many international efforts have resulted in databases that catalog experimentally determined protein-protein interactions. Some of them are theHuman Protein Reference Database,Database of Interacting Proteins, the Molecular Interaction Database (MINT),[9]IntAct,[10]andBioGRID.[11]At the same time, multiple computational approaches have been proposed to predict interactions.[12]FunCoup andSTRINGare examples of such databases, where protein-protein interactions inferred from multiple evidences are gathered and made available for public usage.[citation needed]
Recent studies have indicated the conservation of molecular networks through deep evolutionary time.[13]Moreover, it has been discovered that proteins with high degrees of connectedness are more likely to be essential for survival than proteins with lesser degrees.[14]This observation suggests that the overall composition of the network (not simply interactions between protein pairs) is vital for an organism's overall functioning.
Thegenomeencodes thousands of genes whose products (mRNAs, proteins) are crucial to the various processes of life, such as cell differentiation, cell survival, and metabolism. Genes produce such products through a process called transcription, which is regulated by a class of proteins calledtranscription factors. For instance, the human genome encodes almost 1,500 DNA-binding transcription factors that regulate the expression of more than 20,000 human genes.[15]The complete set of gene products and the interactions among them constitutesgene regulatory networks(GRN). GRNs regulate the levels of gene products within the cell and, in turn, the cellular processes.
GRNs are represented with genes and transcriptional factors as nodes and the relationship between them as edges. These edges are directional, representing the regulatory relationship between the two ends of the edge. For example, the directed edge from gene A to gene B indicates that A regulates the expression of B. Thus, these directional edges can not only represent the promotion of gene regulation but also its inhibition.
GRNs are usually constructed by utilizing the gene regulation knowledge available from databases such asReactomeandKEGG. High-throughput measurement technologies, such asmicroarray,RNA-Seq,ChIP-chip, andChIP-seq, enabled the accumulation of large-scale transcriptomics data, which could help in understanding the complex gene regulation patterns.[16][17]
Gene co-expression networks can be perceived as association networks between variables that measure transcript abundances. These networks have been used to provide a systems-biology analysis of DNA microarray data, RNA-seq data, miRNA data, etc.Weighted gene co-expression network analysisis extensively used to identify co-expression modules and intramodular hub genes.[18]Co-expression modules may correspond to cell types or pathways, while highly connected intramodular hubs can be interpreted as representatives of their respective modules.
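A hard-threshold version of this idea can be sketched in a few lines: compute pairwise Pearson correlations between gene expression profiles and keep the strongest ones as edges. Note that WGCNA proper uses soft-thresholded (power-transformed) correlations rather than a hard cut-off, and the gene names and expression values below are invented:

```python
import numpy as np

def coexpression_edges(expr, genes, threshold=0.8):
    """Sketch of a correlation-based co-expression network.

    expr is a (genes x samples) matrix of expression values; an undirected
    edge is drawn between two genes when the absolute Pearson correlation
    of their expression profiles meets the threshold.
    """
    corr = np.corrcoef(expr)                    # pairwise Pearson correlations
    n = len(genes)
    return [(genes[i], genes[j], round(float(corr[i, j]), 2))
            for i in range(n) for j in range(i + 1, n)
            if abs(corr[i, j]) >= threshold]

# Toy expression matrix: 4 genes measured in 5 samples (made-up numbers).
genes = ["g1", "g2", "g3", "g4"]
expr = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                 [1.1, 2.2, 2.9, 4.1, 5.2],    # tracks g1 closely
                 [5.0, 4.0, 3.0, 2.0, 1.0],    # anti-correlated with g1
                 [2.0, 2.0, 2.1, 1.9, 2.0]])   # nearly flat, weakly correlated
print(coexpression_edges(expr, genes))
# g1-g2, g1-g3 and g2-g3 pass the threshold; g4 remains unconnected
```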
Cells break down the food and nutrients into small molecules necessary for cellular processing through a series of biochemical reactions. These biochemical reactions are catalyzed byenzymes. The complete set of all these biochemical reactions in all the pathways represents themetabolic network. Within the metabolic network, the small molecules take the roles of nodes, and they could be either carbohydrates, lipids, or amino acids. The reactions which convert these small molecules from one form to another are represented as edges. It is possible to use network analyses to infer how selection acts on metabolic pathways.[19]
Signals are transduced within cells or between cells and thus form complex signaling networks, which play a key role in tissue structure. For instance, theMAPK/ERK pathwayis transduced from the cell surface to the cell nucleus by a series of protein-protein interactions, phosphorylation reactions, and other events.[20]Signaling networks typically integrateprotein–protein interaction networks,gene regulatory networks, andmetabolic networks.[21][22]Single-cell sequencing technologies allow the extraction of intercellular signaling; an example is NicheNet, which models intercellular communication by linking ligands to target genes.[23]
The complex interactions in thebrainmake it a perfect candidate to apply network theory.Neuronsin the brain are deeply connected with one another, and this results in complex networks being present in the structural and functional aspects of the brain.[24]For instance,small-world networkproperties have been demonstrated in connections between cortical regions of the primate brain[25]or during swallowing in humans.[26]This suggests that cortical areas of the brain are not directly interacting with each other, but most areas can be reached from all others through only a few interactions.
All organisms are connected through feeding interactions. If a species eats or is eaten by another species, they are connected in an intricatefood webof predator and prey interactions. The stability of these interactions has been a long-standing question in ecology.[27]That is to say if certain individuals are removed, what happens to the network (i.e., does it collapse or adapt)? Network analysis can be used to explore food web stability and determine if certain network properties result in more stable networks. Moreover, network analysis can be used to determine how selective removals of species will influence the food web as a whole.[28]This is especially important considering the potential species loss due to global climate change.
In biology, pairwise interactions have historically been the focus of intense study. With the recent advances innetwork science, it has become possible to scale up pairwise interactions to include individuals of many species involved in many sets of interactions to understand the structure and function of largerecological networks.[29]The use ofnetwork analysiscan allow for both the discovery and understanding of how these complex interactions link together within the system's network, a property that has previously been overlooked. This powerful tool allows for the study of various types of interactions (fromcompetitivetocooperative) using the same general framework.[30]For example, plant-pollinatorinteractions are mutually beneficial and often involve many different species of pollinators as well as many different species of plants. These interactions are critical to plant reproduction and thus the accumulation of resources at the base of thefood chainfor primary consumers, yet these interaction networks are threatened byanthropogenicchange. The use of network analysis can illuminate howpollination networkswork and may, in turn, inform conservation efforts.[31]Within pollination networks, nestedness (i.e., specialists interact with a subset of species that generalists interact with), redundancy (i.e., most plants are pollinated by many pollinators), andmodularityplay a large role in network stability.[31][32]These network properties may actually work to slow the spread of disturbance effects through the system and potentially buffer the pollination network from anthropogenic changes somewhat.[32]More generally, the structure of species interactions within an ecological network can tell us something about the diversity, richness, and robustness of the network.[33]Researchers can even compare current constructions of species interactions networks with historical reconstructions of ancient networks to determine how networks have changed over time.[34]Much research into these complex species interactions networks is highly concerned with understanding what factors (e.g., species richness, connectance, nature of the physical environment) lead to network stability.[35]
Network analysis provides the ability to quantify associations between individuals, which makes it possible to infer details about the network as a whole at the species and/or population level.[36]One of the most attractive features of the network paradigm would be that it provides a single conceptual framework in which the social organization of animals at all levels (individual, dyad, group, population) and for all types of interaction (aggressive, cooperative, sexual, etc.) can be studied.[37]
Researchers interested inethologyacross many taxa, from insects to primates, are starting to incorporate network analysis into their research. Researchers interested in social insects (e.g., ants and bees) have used network analyses better to understand the division of labor, task allocation, and foraging optimization within colonies.[38][39][40]Other researchers are interested in how specific network properties at the group and/or population level can explain individual-level behaviors. Studies have demonstrated how animal social network structure can be influenced by factors ranging from characteristics of the environment to characteristics of the individual, such as developmental experience and personality. At the level of the individual, the patterning of social connections can be an important determinant offitness, predicting both survival and reproductive success. At the population level, network structure can influence the patterning of ecological and evolutionary processes, such asfrequency-dependent selectionand disease and information transmission.[41]For instance, a study onwire-tailed manakins(a small passerine bird) found that a male'sdegreein the network largely predicted the ability of the male to rise in the social hierarchy (i.e., eventually obtain a territory and matings).[42]Inbottlenose dolphingroups, an individual's degree andbetweenness centralityvalues may predict whether or not that individual will exhibit certain behaviors, like the use of side flopping and upside-down lobtailing to lead group traveling efforts; individuals with high betweenness values are more connected and can obtain more information, and thus are better suited to lead group travel and therefore tend to exhibit these signaling behaviors more than other group members.[43]
Social network analysiscan also be used to describe the social organization within a species more generally, which frequently reveals important proximate mechanisms promoting the use of certain behavioral strategies. These descriptions are frequently linked to ecological properties (e.g., resource distribution). For example, network analyses revealed subtle differences in the group dynamics of two related equidfission-fusionspecies,Grevy's zebraandonagers, living in variable environments; Grevy's zebras show distinct preferences in their association choices when they fission into smaller groups, whereas onagers do not.[44]Similarly, researchers interested in primates have also utilized network analyses to compare social organizations across the diverseprimateorder, suggesting that using network measures (such ascentrality,assortativity,modularity, and betweenness) may be useful in terms of explaining the types of social behaviors we see within certain groups and not others.[45]
Finally, social network analysis can also reveal important fluctuations in animal behaviors across changing environments. For example, network analyses in femalechacma baboons(Papio hamadryas ursinus) revealed important dynamic changes across seasons that were previously unknown; instead of creating stable, long-lasting social bonds with friends, baboons were found to exhibit more variable relationships which were dependent on short-term contingencies related to group-level dynamics as well as environmental variability.[46]Changes in an individual's social network environment can also influence characteristics such as 'personality': for example, social spiders that huddle with bolder neighbors tend to increase also in boldness.[47]This is a very small set of broad examples of how researchers can use network analysis to study animal behavior. Research in this area is currently expanding very rapidly, especially since the broader development of animal-borne tags andcomputer visioncan be used to automate the collection of social associations.[48]Social network analysis is a valuable tool for studying animal behavior across all animal species and has the potential to uncover new information about animal behavior and social ecology that was previously poorly understood.
Within a nucleus,DNAis constantly in motion. Perpetual actions such as genome folding and cohesin extrusion morph the shape of a genome in real time. The spatial location of strands ofchromatinrelative to each other plays an important role in the activation or suppression of certain genes. DNA-DNA chromatin networks help biologists to understand these interactions by analyzing commonalities amongst differentloci. The size of a network can vary significantly, from a few genes to several thousand, and thus network analysis can provide vital support in understanding relationships among different areas of the genome. As an example, analysis of spatially proximal loci in the nucleus, measured withGenome Architecture Mapping (GAM), can be used to construct a network of loci with edges representing highly linked genomic regions.
The first graphic showcases the Hist1 region of the mm9 mouse genome, with each node representing a genomic locus. Two nodes are connected by an edge if their linkage disequilibrium is greater than the average across all 81 genomic windows. The locations of the nodes within the graphic are randomly selected, and the methodology of choosing edges yields a simple but rudimentary graphical representation of the relationships in the dataset. The second visual presents the same information as the first; however, the network starts with every locus placed sequentially in a ring configuration. It then pulls nodes together, using linear interpolation, in proportion to their linkage. The figure illustrates strong connections among the central genomic windows as well as the edge loci at the beginning and end of the Hist1 region.
To draw useful information from a biological network, an understanding of the statistical and mathematical techniques of identifying relationships within a network is vital. Procedures to identify association, communities, and centrality within nodes in a biological network can provide insight into the relationships of whatever the nodes represent whether they are genes, species, etc. Formulation of these methods transcends disciplines and relies heavily ongraph theory,computer science, andbioinformatics.
There are many different ways to measure the relationships of nodes when analyzing a network. In many cases, the measure used to find nodes that share similarity within a network is specific to the application in which it is used. One of the types of measures that biologists utilize iscorrelation, which specifically centers on the linear relationship between two variables.[49]As an example,weighted gene co-expression network analysisusesPearson correlationto analyze linked gene expression and understand genetics at a systems level.[50]Another measure of correlation islinkage disequilibrium. Linkage disequilibrium describes the non-random association of genetic sequences among loci in a given chromosome.[51]An example of its use is in detecting relationships inGAMdata across genomic intervals based upon detection frequencies of certain loci.[52]
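The standard linkage disequilibrium calculation is short enough to sketch directly; here it is applied to hypothetical detection frequencies of two genomic windows (the normalization to D′ follows the usual population-genetics convention, and the numbers are invented):

```python
def linkage_disequilibrium(p_a, p_b, p_ab):
    """Classic linkage disequilibrium between two loci.

    p_a and p_b are the individual detection frequencies of loci A and B
    (e.g., how often each genomic window is seen across nuclear profiles),
    and p_ab is their co-detection frequency.  D measures the departure
    from independence; D' rescales it to the range [-1, 1].
    """
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d, (d / d_max if d_max > 0 else 0.0)

# Two windows detected in 40% and 50% of profiles, together in 30%.
print(linkage_disequilibrium(0.4, 0.5, 0.3))   # D = 0.1, D' = 0.5
```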
The concept ofcentralitycan be extremely useful when analyzing biological network structures. There are many different methods to measure centrality, such as betweenness, degree, eigenvector, and Katz centrality. Each centrality technique can provide different insights into the nodes of a particular network; however, they all share the goal of measuring the prominence of a node in a network.[53]In 2005, researchers atHarvard Medical Schoolapplied centrality measures to the yeast protein interaction network. They found that proteins exhibiting high betweenness centrality were more essential and that betweenness was closely related to a given protein's evolutionary age.[54]
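Betweenness centrality is illustrated below on a toy undirected network, assuming the third-party NetworkX library is available; the node labels and edges are invented to show how a low-degree bridging node can still have the highest betweenness:

```python
import networkx as nx   # assumes NetworkX is installed

# A toy interaction network: node "d" bridges two clusters, so it should
# stand out on betweenness even though its degree is modest.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),   # left cluster
                  ("c", "d"), ("d", "e"),               # bridge through d
                  ("e", "f"), ("f", "g"), ("e", "g")])  # right cluster

bc = nx.betweenness_centrality(G)   # normalized: how often a node lies on shortest paths
deg = dict(G.degree())
for node in sorted(bc, key=bc.get, reverse=True):
    print(node, deg[node], round(bc[node], 2))
# "d" has degree 2 but the highest betweenness, because every shortest path
# between the two clusters passes through it.
```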
Studying thecommunity structureof a network by subdividing groups of nodes into like-regions can be an integral tool for bioinformatics when exploring data as a network.[55]A food web of theSecaucus High SchoolMarsh exemplifies the benefits of grouping, as the relationships between nodes are far easier to analyze with well-made communities. While the first graphic is hard to visualize, the second provides a better view of the pockets of highly connected feeding relationships that would be expected in a food web. Community detection remains an active area of research. Scientists and graph theorists continuously discover new ways of subdividing networks, and thus a plethora of differentalgorithmsexist for creating these relationships.[56]Like many other tools that biologists utilize to understand data with network models, every algorithm can provide its own unique insight and may vary widely on aspects such as accuracy ortime complexityof calculation.
In 2002, a food web of marine mammals in theChesapeake Baywas divided into communities by biologists using a community detection algorithm based on neighbors of nodes with high degree centrality. The resulting communities displayed a sizable split between pelagic and benthic organisms.[57]Two very common community detection algorithms for biological networks are the Louvain method and the Leiden algorithm.
TheLouvain methodis agreedy algorithmthat attempts to maximizemodularity, a measure that favors heavy edges within communities and sparse edges between them. The algorithm starts with each node in its own community and iteratively moves each node to whichever neighboring community yields the highest modularity gain.[58][59]Once no modularity increase can occur by joining nodes to a community, a newweighted networkis constructed with communities as nodes, edges representing between-community edges, and loops representing edges within a community. The process continues until no increase in modularity occurs.[60]While the Louvain method provides good community detection, it is limited in a few ways. By focusing mainly on maximizing a given modularity measure, it may produce badly connected communities for the sake of the metric; however, the Louvain method performs fairly well and is easy to understand compared to many other community detection algorithms.[59]
The Leiden algorithm expands on the Louvain method by providing a number of improvements. When joining nodes to a community, only neighborhoods that have recently changed are considered, which greatly improves the speed of merging nodes. Another optimization is a refinement phase in which the algorithm chooses randomly among the candidate communities a node may merge with. This allows for greater depth in exploring communities, whereas the Louvain method commits only to the single choice that maximizes modularity. The Leiden algorithm, while more complex than the Louvain method, performs faster with better community detection and can be a valuable tool for identifying groups.[59]
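Both methods optimize the same quantity, so a small sketch of the modularity computation itself may help; the adjacency matrix and community labels below are invented toy data:

```python
def modularity(adj, communities):
    """Newman modularity Q of a hard partition of an undirected graph.

    adj is a symmetric 0/1 adjacency matrix and communities maps each node
    index to its community label.  Q compares the fraction of edges that
    fall inside communities with the fraction expected if edges were
    rewired at random while preserving node degrees.
    """
    n = len(adj)
    degree = [sum(row) for row in adj]
    two_m = sum(degree)                      # 2m: each edge counted twice
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - degree[i] * degree[j] / two_m
    return q / two_m

# Two triangles joined by a single edge (nodes 0-2 and 3-5).
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(modularity(adj, {0: "L", 1: "L", 2: "L", 3: "R", 4: "R", 5: "R"}))  # ~0.36
print(modularity(adj, {i: "all" for i in range(6)}))                       # 0.0
```

Splitting the graph into its two natural triangles scores well above lumping everything into one community, which is exactly the signal these algorithms climb towards.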
Network motifs, or statistically significant recurring interaction patterns within a network, are a commonly used tool to understand biological networks. A major use case of network motifs is inNeurophysiologywhere motif analysis is commonly used to understand interconnected neuronal functions at varying scales.[61]As an example, in 2017, researchers atBeijing Normal Universityanalyzed highly represented 2 and 3 node network motifs in directed functional brain networks constructed byResting state fMRIdata to study the basic mechanisms in brain information flow.[62]
|
https://en.wikipedia.org/wiki/Biological_network
|
Network medicineis the application ofnetwork sciencetowards identifying, preventing, and treating diseases. This field focuses on usingnetwork topologyandnetwork dynamicstowards identifying diseases and developing medical drugs.Biological networks, such asprotein-protein interactionsandmetabolic pathways, are utilized by network medicine.Disease networks, which map relationships between diseases and biological factors, also play an important role in the field.Epidemiologyis extensively studied using network science as well;social networksandtransportation networksare used to model the spreading of disease across populations. Network medicine is a medically focused area ofsystems biology.
The term "network medicine" was introduced byAlbert-László Barabásiin an the article "Network Medicine – From Obesity to the 'Diseasome'", published inThe New England Journal of Medicine, in 2007. Barabási states thatbiological systems, similarly to social and technological systems, contain many components that are connected in complicated relationships but are organized by simple principles. Relaying on the tools and principles ofnetwork theory,[1]the organizing principles can be analyzed by representing systems ascomplex networks, which are collections ofnodeslinked together by a particular biological or molecular relationship. For networks pertaining to medicine, nodes represent biological factors (biomolecules, diseases, phenotypes, etc.) and links (edges) represent their relationships (physical interactions, shared metabolic pathway, shared gene, shared trait, etc.).[2]
Barabási suggested that understanding human disease requires us to focus on three key networks: themetabolic network, thedisease network, and the social network. Network medicine is based on the idea that understanding the complexity ofgene regulation,metabolic reactions, andprotein-protein interactions, and representing these as complex networks, will shed light on the causes and mechanisms of diseases. It is possible, for example, to infer abipartite graphrepresenting the connections of diseases to their associatedgenesusing theOMIMdatabase.[3]The projection onto the diseases, called the human disease network (HDN), is a network of diseases connected to each other if they share a common gene. Using the HDN, diseases can be classified and analyzed through the genetic relationships between them. Network medicine has proven to be a valuable tool in analyzing big biomedical data.[4]
The whole set of molecular interactions in the human cell, also known as theinteractome, can be used for disease identification and prevention.[5]These networks have been technically classified asscale-free,disassortative,small-world networks, having a highbetweenness centrality.[6]
Protein-protein interactionshave been mapped, using proteins asnodesand their interactions with each other as links.[7]These maps utilize databases such asBioGRIDand theHuman Protein Reference Database. Themetabolic networkencompasses the biochemical reactions inmetabolic pathways, connecting twometabolitesif they are in the same pathway.[8]Researchers have used databases such asKEGGto map these networks. Other networks includecell signalingnetworks,gene regulatory networks, andRNAnetworks.
Using interactome networks, one can discover and classify diseases, as well as develop treatments through knowledge of their associations and their role in the networks. One observation is that diseases can be classified not by their principalphenotypes(pathophenotypes) but by theirdisease module, which is a neighborhood or group of components in the interactome that, if disrupted, results in a specific pathophenotype.[5]Disease modules can be used in a variety of ways, such as predicting disease genes that have not been discovered yet. Therefore, network medicine looks to identify the diseasemodulefor a specific pathophenotype usingclustering algorithms.
Human disease networks, also called the diseasome, are networks in which the nodes are diseases and the links represent the strength of correlation between them. This correlation is commonly quantified based on the associated cellular components that two diseases share. The first-published human disease network (HDN) looked at genes, finding that many of the disease-associated genes arenon-essential genes, as these are the genes that do not completely disrupt the network and are able to be passed down generations.[3]Metabolic disease networks (MDN), in which two diseases are connected by a sharedmetaboliteormetabolic pathway, have also been extensively studied and are especially relevant in the case ofmetabolic disorders.[9]
Three representations of the diseasome are:[6]
Some disease networks connect diseases to associated factors outside the human cell. Networks of environmental and geneticetiological factorslinked with shared diseases, called the "etiome", can be also used to assess theclusteringofenvironmental factorsin these networks and understand the role of the environment on the interactome.[11]The human symptom-disease network (HSDN), published in June 2014, showed that the symptoms of disease and disease associated cellular components were strongly correlated and that diseases of the same categories tend to form highly connected communities, with respect to their symptoms.[12]
Networkpharmacologyis a developing field based insystems pharmacologythat looks at the effect of drugs on both the interactome and the diseasome.[13]The topology of a biochemical reaction network determines the shape of the drugdose-response curve[14]as well as the type of drug-drug interactions,[15]and thus can help design efficient and safe therapeutic strategies. In addition, the drug-target network (DTN) can play an important role in understanding the mechanisms of action of approved and experimental drugs.[16]The network theory view ofpharmaceuticalsis based on the effect of the drug in the interactome, especially the region that thedrug targetoccupies.Combination therapyfor a complex disease (polypharmacology) is suggested in this field since oneactive pharmaceutical ingredient(API) aimed at one target may not affect the entire disease module.[13]The concept of disease modules can be used to aid indrug discovery,drug design, and the development ofbiomarkersfor disease detection.[2]There are a variety of ways of identifying drugs using network pharmacology; a simple example is the "guilt by association" method, which states that if two diseases are treated by the same drug, a drug that treats one disease may treat the other.[17]Drug repurposing,drug-drug interactions, and drugside-effectshave also been studied in this field.[18][2]The next iteration of network pharmacology used entirely different disease definitions, defined as dysfunction in signaling modules derived from protein-protein interaction modules. The latter, as well as the interactome, had many conceptual shortcomings; for example, each protein appears only once in the interactome, whereas in reality one protein can occur in different contexts and different cellular locations. Such signaling modules are therapeutically best targeted at several sites, which is now the new and clinically applied definition of network pharmacology. To achieve higher than current precision, patients must be selected not solely on descriptive phenotypes but also based on diagnostics that detect the module dysregulation. Moreover, such mechanism-based network pharmacology has the advantage that the drugs used within one module are highly synergistic, which allows the dose of each drug to be reduced, which in turn reduces the potential of these drugs acting on other proteins outside the module and hence the chance of unwanted side effects.[19]
The study of network epidemics has been built by applying network science to existingepidemic models, as manytransportation networksand social networks play a role in the spread of disease.[20]Social networks have been used to assess the role of social ties in the spread ofobesityin populations.[21]Epidemic models and concepts, such asspreadingandcontact tracing, have been adapted for use in network analysis.[22]These models can be used inpublic healthpolicies, in order to implement strategies such astargeted immunization,[23]and have recently been used to model the spread of theEbola virus epidemic in West Africaacross countries and continents.[24][25]
Recently, some researchers have represented medication use in the form of networks. The nodes in these networks represent medications and the edges represent some sort of relationship between these medications. Cavallo et al. (2013)[26]described the topology of a co-prescription network to demonstrate which drug classes are most co-prescribed. Bazzoni et al. (2015)[27]concluded that the DPNs of co-prescribed medications are dense, highly clustered, modular, and assortative. Askar et al. (2021)[28]created a network of severe drug-drug interactions (DDIs), showing that it consisted of many clusters.
The development of organs[29]and other biological systems can be modelled as network structures where the clinical (e.g., radiographic, functional) characteristics can be represented as nodes and the relationships between these characteristics are represented as the links among such nodes.[30]Therefore, it is possible to use networks to model how organ systems dynamically interact.
The Channing Division of Network Medicine atBrigham and Women's Hospitalwas created in 2012 to study, reclassify, and develop treatments forcomplex diseasesusing network science andsystems biology.[31]It currently involves more than 80Harvard Medical School(HMS) faculty and focuses on three areas:
Massachusetts Institute of Technologyoffers an undergraduate course called "Network Medicine: Using Systems Biology and Signaling Networks to Create Novel Cancer Therapeutics".[33]Also,HarvardCatalyst (The Harvard Clinical and Translational Science Center) offers a three-day course entitled "Introduction to Network Medicine", open to clinical and science professionals with doctorate degrees.[34]
Current worldwide efforts in network medicine are coordinated by theNetwork Medicine Institute and Global Alliance, representing 33 leading universities and institutions around the world committed to improving global health.
|
https://en.wikipedia.org/wiki/Network_medicine
|
In mathematics, agraph partitionis the reduction of agraphto a smaller graph bypartitioningits set of nodes into mutually exclusive groups. Edges of the original graph that cross between the groups will produce edges in the partitioned graph. If the number of resulting edges is small compared to the original graph, then the partitioned graph may be better suited for analysis and problem-solving than the original. Finding a partition that simplifies graph analysis is a hard problem, but one that has applications to scientific computing,VLSIcircuit design, and task scheduling in multiprocessor computers, among others.[1]Recently, the graph partition problem has gained importance due to its application for clustering and detection of cliques in social, pathological and biological networks. For a survey on recent trends in computational methods and applications seeBuluc et al. (2013).[2]Two common examples of graph partitioning areminimum cutandmaximum cutproblems.
Typically, graph partition problems fall under the category ofNP-hardproblems. Solutions to these problems are generally derived using heuristics and approximation algorithms.[3]However, uniform graph partitioning or a balanced graph partition problem can be shown to beNP-completeto approximate within any finite factor.[1]Even for special graph classes such as trees and grids, no reasonable approximation algorithms exist,[4]unlessP=NP. Grids are a particularly interesting case since they model the graphs resulting fromFinite Element Model (FEM)simulations. When not only the number of edges between the components is approximated, but also the sizes of the components, it can be shown that no reasonable fully polynomial algorithms exist for these graphs.[4]
Consider a graphG= (V,E), whereVdenotes the set ofnvertices andEthe set of edges. For a (k,v) balanced partition problem, the objective is to partitionGintokcomponents of at most sizev· (n/k), while minimizing the capacity of the edges between separate components.[1]Also, givenGand an integerk> 1, partitionVintokparts (subsets)V1,V2, ...,Vksuch that the parts are disjoint and have equal size, and the number of edges with endpoints in different parts is minimized. Such partition problems have been discussed in literature as bicriteria-approximation or resource augmentation approaches. A common extension is tohypergraphs, where an edge can connect more than two vertices. A hyperedge is not cut if all vertices are in one partition, and cut exactly once otherwise, no matter how many vertices are on each side. This usage is common inelectronic design automation.
For a specific (k, 1 +ε) balanced partition problem, we seek to find a minimum cost partition ofGintokcomponents with each component containing a maximum of (1 +ε)·(n/k) nodes. We compare the cost of this approximation algorithm to the cost of a (k,1) cut, wherein each of thekcomponents must have the same size of (n/k) nodes each, thus being a more restricted problem. Thus,
We already know that (2,1) cut is the minimum bisection problem and it is NP-complete.[5]Next, we assess a 3-partition problem whereinn= 3k, which is also bounded in polynomial time.[1]Now, if we assume that we have a finite approximation algorithm for (k, 1)-balanced partition, then, either the 3-partition instance can be solved using the balanced (k,1) partition inGor it cannot be solved. If the 3-partition instance can be solved, then (k, 1)-balanced partitioning problem inGcan be solved without cutting any edge. Otherwise, if the 3-partition instance cannot be solved, the optimum (k, 1)-balanced partitioning inGwill cut at least one edge. An approximation algorithm with a finite approximation factor has to differentiate between these two cases. Hence, it can solve the 3-partition problem which is a contradiction under the assumption thatP=NP. Thus, it is evident that (k,1)-balanced partitioning problem has no polynomial-time approximation algorithm with a finite approximation factor unlessP=NP.[1]
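For very small instances the balanced partition problem can simply be solved by exhaustive search, which also makes the combinatorial blow-up behind these hardness results tangible. The sketch below (function name and toy graph invented for illustration) finds a minimum bisection, i.e. a (2, 1) balanced partition, of a six-node graph:

```python
from itertools import combinations

def min_bisection(n, edges):
    """Exhaustive (2,1)-balanced partition of a small graph.

    Tries every way of splitting the n nodes into two equal halves and
    returns the split that cuts the fewest edges.  The search space grows
    combinatorially, which is exactly why only tiny instances are tractable
    this way and heuristics are used in practice.
    """
    best_cut, best_half = None, None
    for half in combinations(range(n), n // 2):
        left = set(half)
        cut = sum(1 for u, v in edges if (u in left) != (v in left))
        if best_cut is None or cut < best_cut:
            best_cut, best_half = cut, left
    return best_cut, sorted(best_half)

# Two triangles joined by one edge: the optimal bisection cuts just that edge.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
print(min_bisection(6, edges))   # (1, [0, 1, 2])
```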
Theplanar separator theoremstates that anyn-vertexplanar graphcan be partitioned into roughly equal parts by the removal of O(√n) vertices. This is not a partition in the sense described above, because the partition set consists of vertices rather than edges. However, the same result also implies that every planar graph of bounded degree has a balanced cut with O(√n) edges.
Since graph partitioning is a hard problem, practical solutions are based on heuristics. There are two broad categories of methods, local and global. Well-known local methods are theKernighan–Lin algorithmand theFiduccia-Mattheyses algorithm, which were the first effective 2-way cut methods based on local search strategies. Their major drawback is the arbitrary initial partitioning of the vertex set, which can affect the final solution quality. Global approaches rely on properties of the entire graph and do not depend on an arbitrary initial partition. The most common example is spectral partitioning, where a partition is derived from approximate eigenvectors of the adjacency matrix, orspectral clustering, which groups graph vertices using theeigendecompositionof thegraph Laplacianmatrix.
A multi-level graph partitioning algorithm works by applying one or more stages. Each stage reduces the size of the graph by collapsing vertices and edges, partitions the smaller graph, then maps back and refines this partition of the original graph.[6]A wide variety of partitioning and refinement methods can be applied within the overall multi-level scheme. In many cases, this approach can give both fast execution times and very high quality results.
One widely used example of such an approach isMETIS,[7]a graph partitioner, and hMETIS, the corresponding partitioner for hypergraphs.[8]An alternative approach, originating from[9]and implemented, e.g., inscikit-learn, isspectral clusteringwith the partitioning determined fromeigenvectorsof thegraph Laplacianmatrix for the original graph, computed by theLOBPCGsolver withmultigridpreconditioning.
Given a graphG=(V,E){\displaystyle G=(V,E)}withadjacency matrixA{\displaystyle A}, where an entryAij{\displaystyle A_{ij}}implies an edge between nodei{\displaystyle i}andj{\displaystyle j}, anddegree matrixD{\displaystyle D}, which is a diagonal matrix, where each diagonal entry of a rowi{\displaystyle i},dii{\displaystyle d_{ii}}, represents the node degree of nodei{\displaystyle i}. TheLaplacian matrixL{\displaystyle L}is defined asL=D−A{\displaystyle L=D-A}. Now, a ratio-cut partition for graphG=(V,E){\displaystyle G=(V,E)}is defined as a partition ofV{\displaystyle V}into disjointU{\displaystyle U}, andW{\displaystyle W}, minimizing the ratio
of the number of edges that actually cross this cut to the number of pairs of vertices that could support such edges. Spectral graph partitioning can be motivated[10]by analogy with partitioning of a vibrating string or a mass-spring system and similarly extended to the case of negative weights of the graph.[11]
In such a scenario, thesecond smallest eigenvalue(λ2{\displaystyle \lambda _{2}}) ofL{\displaystyle L}, yields alower boundon the optimal cost (c{\displaystyle c}) of ratio-cut partition withc≥λ2n{\displaystyle c\geq {\frac {\lambda _{2}}{n}}}. The eigenvector (V2{\displaystyle V_{2}}) corresponding toλ2{\displaystyle \lambda _{2}}, called theFiedler vector, bisects the graph into only two communities based on thesignof the corresponding vector entry. Division into a larger number of communities can be achieved by repeatedbisectionor by usingmultiple eigenvectorscorresponding to the smallest eigenvalues.[12]The examples in Figures 1,2 illustrate the spectral bisection approach.
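A minimal sketch of spectral bisection, assuming NumPy is available, is given below; it forms the Laplacian, extracts the Fiedler vector, and splits the nodes by sign. The toy adjacency matrix is invented, and the sign convention of the eigenvector (and hence which half is listed first) is arbitrary:

```python
import numpy as np

def spectral_bisect(adj):
    """Spectral bisection of an undirected graph via its Fiedler vector.

    Builds the Laplacian L = D - A, takes the eigenvector of the second
    smallest eigenvalue, and splits nodes by the sign of their entries.
    """
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigenvalues, eigenvectors = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = eigenvectors[:, 1]                    # second smallest eigenvalue
    neg = [i for i in range(len(A)) if fiedler[i] < 0]
    pos = [i for i in range(len(A)) if fiedler[i] >= 0]
    return neg, pos

# Two triangles joined by a single bridge edge.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(spectral_bisect(adj))   # recovers the two triangles as the two halves
```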
Minimum cut partitioning however fails when the number of communities to be partitioned, or the partition sizes are unknown. For instance, optimizing the cut size for free group sizes puts all vertices in the same community. Additionally, cut size may be the wrong thing to minimize since a good division is not just one with small number of edges between communities. This motivated the use ofModularity(Q)[13]as a metric to optimize a balanced graph partition. The example in Figure 3 illustrates 2 instances of the same graph such that in(a)modularity (Q) is the partitioning metric and in(b), ratio-cut is the partitioning metric.
Another objective function used for graph partitioning isConductancewhich is the ratio between the number of cut edges and the volume of the smallest part. Conductance is related to electrical flows and random walks. TheCheeger boundguarantees that spectral bisection provides partitions with nearly optimal conductance. The quality of this approximation depends on the second smallest eigenvalue of the Laplacian λ2.
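Conductance itself is straightforward to compute once a cut is given; the sketch below (toy graph and function name invented) evaluates the cut that separates the two triangles of a small graph:

```python
def conductance(adj, part):
    """Conductance of a cut: cut edges divided by the smaller part's volume.

    adj is a symmetric 0/1 adjacency matrix and part is the set of node
    indices on one side of the cut; the volume of a part is the sum of the
    degrees of its nodes.
    """
    n = len(adj)
    cut = sum(adj[i][j] for i in part for j in range(n) if j not in part)
    vol_part = sum(adj[i][j] for i in part for j in range(n))
    vol_rest = sum(sum(row) for row in adj) - vol_part
    return cut / min(vol_part, vol_rest)

adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(conductance(adj, {0, 1, 2}))   # 1/7: one cut edge, volume 7 on each side
```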
Graph partition can be useful for identifying the minimal set of nodes or links that should be immunized in order to stop epidemics.[14]
Spin models have been used for clustering of multivariate data wherein similarities are translated into coupling strengths.[15]The properties of ground state spin configuration can be directly interpreted as communities. Thus, a graph is partitioned to minimize the Hamiltonian of the partitioned graph. TheHamiltonian(H) is derived by assigning the following partition rewards and penalties.
Additionally, Kernel-PCA-based Spectral clustering takes a form of least squares Support Vector Machine framework, and hence it becomes possible to project the data entries to a kernel induced feature space that has maximal variance, thus implying a high separation between the projected communities.[16]
Some methods express graph partitioning as a multi-criteria optimization problem which can be solved using local methods expressed in a game theoretic framework where each node makes a decision on the partition it chooses.[17]
For very large-scale distributed graphs classical partition methods might not apply (e.g.,spectral partitioning, Metis[7]) since they require full access to graph data in order to perform global operations. For such large-scale scenarios distributed graph partitioning is used to perform partitioning through asynchronous local operations only.
scikit-learnimplementsspectral clusteringwith the partitioning determined fromeigenvectorsof thegraph Laplacianmatrix for the original graph computed byARPACK, or byLOBPCGsolver withmultigridpreconditioning.[9]
METIS[7]is a graph partitioning family by Karypis and Kumar. Among this family, kMetis aims at greater partitioning speed, hMetis,[8]applies to hypergraphs and aims at partition quality, and ParMetis[7]is a parallel implementation of the Metis graph partitioning algorithm.
KaHyPar[18][19][20]is a multilevel hypergraph partitioning framework providing direct k-way and recursive bisection based partitioning algorithms. It instantiates the multilevel approach in its most extreme version, removing only a single vertex in every level of the hierarchy. By using this very fine grainedn-level approach combined with strong local search heuristics, it computes solutions of very high quality.
Scotch[21]is graph partitioning framework by Pellegrini. It uses recursive multilevel bisection and includes sequential as well as parallel partitioning techniques.
Jostle[22]is a sequential and parallel graph partitioning solver developed by Chris Walshaw. The commercialized version of this partitioner is known as NetWorks.
Party[23]implements the Bubble/shape-optimized framework and the Helpful Sets algorithm.
The software packages DibaP[24]and its MPI-parallel variant PDibaP[25]by Meyerhenke implement the Bubble framework using diffusion; DibaP also uses AMG-based techniques for coarsening and solving linear systems arising in the diffusive approach.
Sanders and Schulz released a graph partitioning package KaHIP[26](Karlsruhe High Quality Partitioning) that implements for example flow-based methods, more-localized local searches and several parallel and sequential meta-heuristics.
The tools Parkway[27]by Trifunovic and Knottenbelt as well as Zoltan[28]by Devine et al. focus on hypergraph partitioning.
|
https://en.wikipedia.org/wiki/Graph_partition
|
Inmathematics, atopological spaceis, roughly speaking, ageometrical spacein whichclosenessis defined but cannot necessarily be measured by a numericdistance. More specifically, a topological space is asetwhose elements are calledpoints, along with an additional structure called a topology, which can be defined as a set ofneighbourhoodsfor each point that satisfy someaxiomsformalizing the concept of closeness. There are several equivalent definitions of a topology, the most commonly used of which is the definition throughopen sets, which is easier than the others to manipulate.
A topological space is the most general type of amathematical spacethat allows for the definition oflimits,continuity, andconnectedness.[1][2]Common types of topological spaces includeEuclidean spaces,metric spacesandmanifolds.
Although very general, the concept of topological spaces is fundamental, and used in virtually every branch of modern mathematics. The study of topological spaces in their own right is calledgeneral topology(or point-set topology).
Around 1735,Leonhard Eulerdiscovered theformulaV−E+F=2{\displaystyle V-E+F=2}relating the number of vertices (V), edges (E) and faces (F) of aconvex polyhedron, and hence of aplanar graph. The study and generalization of this formula, specifically byCauchy(1789–1857) andL'Huilier(1750–1840),boosted the studyof topology. In 1827,Carl Friedrich GausspublishedGeneral investigations of curved surfaces, which in section 3 defines the curved surface in a similar manner to the modern topological understanding: "A curved surface is said to possess continuous curvature at one of its points A, if the direction of all the straight lines drawn from A to points of the surface at an infinitesimal distance from A are deflected infinitesimally from one and the same plane passing through A."[3][non-primary source needed]
Yet, "untilRiemann's work in the early 1850s, surfaces were always dealt with from a local point of view (as parametric surfaces) and topological issues were never considered".[4]"MöbiusandJordanseem to be the first to realize that the main problem about the topology of (compact) surfaces is to find invariants (preferably numerical) to decide the equivalence of surfaces, that is, to decide whether two surfaces arehomeomorphicor not."[4]
The subject is clearly defined byFelix Kleinin his "Erlangen Program" (1872): the study of the invariants of arbitrary continuous transformations, a kind of geometry. The term "topology" was introduced byJohann Benedict Listingin 1847, although he had used the term in correspondence some years earlier instead of the previously used "Analysis situs". The foundation of this science, for a space of any dimension, was created byHenri Poincaré. His first article on this topic appeared in 1894.[5]In the 1930s,James Waddell Alexander IIandHassler Whitneyfirst expressed the idea that a surface is a topological space that islocally like a Euclidean plane.
Topological spaces were first defined byFelix Hausdorffin 1914 in his seminal "Principles of Set Theory".Metric spaceshad been defined earlier in 1906 byMaurice Fréchet, though it was Hausdorff who popularised the term "metric space" (German:metrischer Raum).[6][7][better source needed]
The utility of the concept of atopologyis shown by the fact that there are several equivalent definitions of thismathematical structure. Thus one chooses theaxiomatizationsuited for the application. The most commonly used is that in terms ofopen sets, but perhaps more intuitive is that in terms ofneighbourhoodsand so this is given first.
This axiomatization is due toFelix Hausdorff.
LetX{\displaystyle X}be a (possibly empty) set. The elements ofX{\displaystyle X}are usually calledpoints, though they can be any mathematical object. LetN{\displaystyle {\mathcal {N}}}be afunctionassigning to eachx{\displaystyle x}(point) inX{\displaystyle X}a non-empty collectionN(x){\displaystyle {\mathcal {N}}(x)}of subsets ofX.{\displaystyle X.}The elements ofN(x){\displaystyle {\mathcal {N}}(x)}will be calledneighbourhoodsofx{\displaystyle x}with respect toN{\displaystyle {\mathcal {N}}}(or, simply,neighbourhoods ofx{\displaystyle x}). The functionN{\displaystyle {\mathcal {N}}}is called aneighbourhood topologyif theaxiomsbelow[8]are satisfied; and thenX{\displaystyle X}withN{\displaystyle {\mathcal {N}}}is called atopological space.
The axioms are:
1. If N{\displaystyle N} is a neighbourhood of x{\displaystyle x} (that is, N∈N(x){\displaystyle N\in {\mathcal {N}}(x)}), then x∈N.{\displaystyle x\in N.}
2. If N{\displaystyle N} is a subset of X{\displaystyle X} and includes a neighbourhood of x,{\displaystyle x,} then N{\displaystyle N} is a neighbourhood of x;{\displaystyle x;} that is, every superset of a neighbourhood of a point is again a neighbourhood of that point.
3. The intersection of two neighbourhoods of x{\displaystyle x} is a neighbourhood of x.{\displaystyle x.}
4. Any neighbourhood N{\displaystyle N} of x{\displaystyle x} includes a neighbourhood M{\displaystyle M} of x{\displaystyle x} such that N{\displaystyle N} is a neighbourhood of each point of M.{\displaystyle M.}
The first three axioms for neighbourhoods have a clear meaning. The fourth axiom has a very important use in the structure of the theory, that of linking together the neighbourhoods of different points of X.{\displaystyle X.}
A standard example of such a system of neighbourhoods is for the real lineR,{\displaystyle \mathbb {R} ,}where a subsetN{\displaystyle N}ofR{\displaystyle \mathbb {R} }is defined to be aneighbourhoodof a real numberx{\displaystyle x}if it includes an open interval containingx.{\displaystyle x.}
Given such a structure, a subsetU{\displaystyle U}ofX{\displaystyle X}is defined to beopenifU{\displaystyle U}is a neighbourhood of all points inU.{\displaystyle U.}The open sets then satisfy the axioms given below in the next definition of a topological space. Conversely, when given the open sets of a topological space, the neighbourhoods satisfying the above axioms can be recovered by definingN{\displaystyle N}to be a neighbourhood ofx{\displaystyle x}ifN{\displaystyle N}includes an open setU{\displaystyle U}such thatx∈U.{\displaystyle x\in U.}[9]
A topology on a set X may be defined as a collection τ{\displaystyle \tau } of subsets of X, called open sets and satisfying the following axioms:[10]
1. The empty set and X itself belong to τ.{\displaystyle \tau .}
2. Any arbitrary (finite or infinite) union of members of τ{\displaystyle \tau } belongs to τ.{\displaystyle \tau .}
3. The intersection of any finite number of members of τ{\displaystyle \tau } belongs to τ.{\displaystyle \tau .}
As this definition of a topology is the most commonly used, the setτ{\displaystyle \tau }of the open sets is commonly called atopologyonX.{\displaystyle X.}
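To make the open-set axioms concrete, here is a minimal sketch (not part of the original article) that checks them for a candidate topology on a small finite set; on a finite set, closure under pairwise unions and intersections already gives closure under arbitrary unions and finite intersections. The function name is_topology and the sample collections are illustrative assumptions.

```python
def is_topology(X, tau):
    """Check the open-set axioms for a candidate topology tau on a finite set X."""
    X = frozenset(X)
    tau = {frozenset(U) for U in tau}
    # Axiom 1: the empty set and the whole space are open.
    if frozenset() not in tau or X not in tau:
        return False
    # Axioms 2 and 3: on a finite set, closure under pairwise unions and
    # pairwise intersections gives closure under arbitrary unions and
    # finite intersections.
    return all(U | V in tau and U & V in tau for U in tau for V in tau)

X = {1, 2, 3}
tau = [set(), {1}, {1, 2}, {1, 2, 3}]        # a topology on X
not_tau = [set(), {1}, {2}, {1, 2, 3}]       # not a topology: {1} | {2} = {1, 2} is missing

print(is_topology(X, tau))      # True
print(is_topology(X, not_tau))  # False
```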
A subsetC⊆X{\displaystyle C\subseteq X}is said to beclosedin(X,τ){\displaystyle (X,\tau )}if itscomplementX∖C{\displaystyle X\setminus C}is an open set.
Using de Morgan's laws, the above axioms defining open sets become axioms defining closed sets:
1. The empty set and X are closed.
2. The intersection of any collection of closed sets is also closed.
3. The union of any finite number of closed sets is also closed.
Using these axioms, another way to define a topological space is as a setX{\displaystyle X}together with a collectionτ{\displaystyle \tau }of closed subsets ofX.{\displaystyle X.}Thus the sets in the topologyτ{\displaystyle \tau }are the closed sets, and their complements inX{\displaystyle X}are the open sets.
There are many other equivalent ways to define a topological space: in other words, the concepts of neighbourhood, open set, and closed set can each be reconstructed from the others, so any one of them may serve as the starting point provided the corresponding axioms are satisfied.
Another way to define a topological space is by using theKuratowski closure axioms, which define the closed sets as thefixed pointsof anoperatoron thepower setofX.{\displaystyle X.}
Anetis a generalisation of the concept ofsequence. A topology is completely determined if for every net inX{\displaystyle X}the set of itsaccumulation pointsis specified.
Many topologies can be defined on a set to form a topological space. When every open set of a topologyτ1{\displaystyle \tau _{1}}is also open for a topologyτ2,{\displaystyle \tau _{2},}one says thatτ2{\displaystyle \tau _{2}}isfinerthanτ1,{\displaystyle \tau _{1},}andτ1{\displaystyle \tau _{1}}iscoarserthanτ2.{\displaystyle \tau _{2}.}A proof that relies only on the existence of certain open sets will also hold for any finer topology, and similarly a proof that relies only on certain sets not being open applies to any coarser topology. The termslargerandsmallerare sometimes used in place of finer and coarser, respectively. The termsstrongerandweakerare also used in the literature, but with little agreement on the meaning, so one should always be sure of an author's convention when reading.
The collection of all topologies on a given fixed setX{\displaystyle X}forms acomplete lattice: ifF={τα:α∈A}{\displaystyle F=\left\{\tau _{\alpha }:\alpha \in A\right\}}is a collection of topologies onX,{\displaystyle X,}then themeetofF{\displaystyle F}is the intersection ofF,{\displaystyle F,}and thejoinofF{\displaystyle F}is the meet of the collection of all topologies onX{\displaystyle X}that contain every member ofF.{\displaystyle F.}
Afunctionf:X→Y{\displaystyle f:X\to Y}between topological spaces is calledcontinuousif for everyx∈X{\displaystyle x\in X}and every neighbourhoodN{\displaystyle N}off(x){\displaystyle f(x)}there is a neighbourhoodM{\displaystyle M}ofx{\displaystyle x}such thatf(M)⊆N.{\displaystyle f(M)\subseteq N.}This relates easily to the usual definition in analysis. Equivalently,f{\displaystyle f}is continuous if theinverse imageof every open set is open.[11]This is an attempt to capture the intuition that there are no "jumps" or "separations" in the function. Ahomeomorphismis abijectionthat is continuous and whoseinverseis also continuous. Two spaces are calledhomeomorphicif there exists a homeomorphism between them. From the standpoint of topology, homeomorphic spaces are essentially identical.[12]
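The open-set characterisation of continuity can be checked mechanically on finite spaces. The following is a small illustrative sketch (not from the article); the dictionaries, the Sierpinski-like example space, and the function names are assumptions made for the example.

```python
def preimage(f, V, X):
    """Inverse image of V under the function f (a dict), restricted to the domain X."""
    return frozenset(x for x in X if f[x] in V)

def is_continuous(f, X, tau_X, tau_Y):
    """f is continuous iff the preimage of every open set of Y is open in X."""
    open_in_X = {frozenset(U) for U in tau_X}
    return all(preimage(f, V, X) in open_in_X for V in tau_Y)

X, tau_X = {1, 2}, [set(), {1}, {1, 2}]                 # a Sierpinski-like space
tau_Y = [set(), {"a"}, {"a", "b"}]                      # topology on Y = {"a", "b"}

f = {1: "a", 2: "b"}
g = {1: "b", 2: "a"}
print(is_continuous(f, X, tau_X, tau_Y))   # True:  preimage({"a"}) = {1} is open
print(is_continuous(g, X, tau_X, tau_Y))   # False: preimage({"a"}) = {2} is not open
```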
Incategory theory, one of the fundamentalcategoriesisTop, which denotes thecategory of topological spaceswhoseobjectsare topological spaces and whosemorphismsare continuous functions. The attempt to classify the objects of this category (up tohomeomorphism) byinvariantshas motivated areas of research, such ashomotopy theory,homology theory, andK-theory.
A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space. Any set can be given thediscrete topologyin which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given thetrivial topology(also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, often topological spaces must beHausdorff spaceswhere limit points are unique.
There exist numerous topologies on any givenfinite set. Such spaces are calledfinite topological spaces. Finite spaces are sometimes used to provide examples or counterexamples to conjectures about topological spaces in general.
Any set can be given thecofinite topologyin which the open sets are the empty set and the sets whose complement is finite. This is the smallestT1topology on any infinite set.[13]
Any set can be given thecocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations.
The real line can also be given thelower limit topology. Here, the basic open sets are the half open intervals[a,b).{\displaystyle [a,b).}This topology onR{\displaystyle \mathbb {R} }is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it.
Ifγ{\displaystyle \gamma }is anordinal number, then the setγ=[0,γ){\displaystyle \gamma =[0,\gamma )}may be endowed with theorder topologygenerated by the intervals(α,β),{\displaystyle (\alpha ,\beta ),}[0,β),{\displaystyle [0,\beta ),}and(α,γ){\displaystyle (\alpha ,\gamma )}whereα{\displaystyle \alpha }andβ{\displaystyle \beta }are elements ofγ.{\displaystyle \gamma .}
Every manifold has a natural topology since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from the Euclidean space in which it is embedded.
TheSierpiński spaceis the simplest non-discrete topological space. It has important relations to thetheory of computationand semantics.
Every subset of a topological space can be given thesubspace topologyin which the open sets are the intersections of the open sets of the larger space with the subset. For anyindexed familyof topological spaces, the product can be given theproduct topology, which is generated by the inverse images of open sets of the factors under theprojectionmappings. For example, in finite products, a basis for the product topology consists of all products of open sets. For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space. This construction is a special case of aninitial topology.
Aquotient spaceis defined as follows: ifX{\displaystyle X}is a topological space andY{\displaystyle Y}is a set, and iff:X→Y{\displaystyle f:X\to Y}is asurjectivefunction, then the quotient topology onY{\displaystyle Y}is the collection of subsets ofY{\displaystyle Y}that have openinverse imagesunderf.{\displaystyle f.}In other words, the quotient topology is the finest topology onY{\displaystyle Y}for whichf{\displaystyle f}is continuous. A common example of a quotient topology is when anequivalence relationis defined on the topological spaceX.{\displaystyle X.}The mapf{\displaystyle f}is then the natural projection onto the set ofequivalence classes. This construction is a special case of afinal topology.
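Since the quotient topology is described by a concrete rule (a subset of Y is open exactly when its inverse image is open in X), it can be computed directly for finite spaces. A minimal sketch, with an illustrative surjection that collapses two points:

```python
from itertools import chain, combinations

def quotient_topology(tau_X, f, Y):
    """Quotient topology on Y induced by a surjection f: X -> Y (given as a dict):
    a subset U of Y is open iff its inverse image under f is open in X."""
    open_in_X = {frozenset(U) for U in tau_X}
    X = set(f)
    Y = sorted(Y)
    all_subsets = chain.from_iterable(combinations(Y, r) for r in range(len(Y) + 1))
    return [set(U) for U in all_subsets
            if frozenset(x for x in X if f[x] in U) in open_in_X]

X = {1, 2, 3}
tau_X = [set(), {1}, {1, 2}, {1, 2, 3}]
f = {1: "a", 2: "b", 3: "b"}                 # surjection collapsing 2 and 3 to "b"

print(quotient_topology(tau_X, f, {"a", "b"}))
# [set(), {'a'}, {'a', 'b'}] -- the Sierpinski topology on {"a", "b"}
```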
TheVietoris topologyon the set of all non-empty subsets of a topological spaceX,{\displaystyle X,}named forLeopold Vietoris, is generated by the following basis: for everyn{\displaystyle n}-tupleU1,…,Un{\displaystyle U_{1},\ldots ,U_{n}}of open sets inX,{\displaystyle X,}we construct a basis set consisting of all subsets of the union of theUi{\displaystyle U_{i}}that have non-empty intersections with eachUi.{\displaystyle U_{i}.}
TheFell topologyon the set of all non-empty closed subsets of alocally compactPolish spaceX{\displaystyle X}is a variant of the Vietoris topology, and is named after mathematician James Fell. It is generated by the following basis: for everyn{\displaystyle n}-tupleU1,…,Un{\displaystyle U_{1},\ldots ,U_{n}}of open sets inX{\displaystyle X}and for every compact setK,{\displaystyle K,}the set of all subsets ofX{\displaystyle X}that are disjoint fromK{\displaystyle K}and have nonempty intersections with eachUi{\displaystyle U_{i}}is a member of the basis.
Metric spaces embody ametric, a precise notion of distance between points.
Everymetric spacecan be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on anynormed vector space. On a finite-dimensionalvector spacethis topology is the same for all norms.
There are many ways of defining a topology on R,{\displaystyle \mathbb {R} ,} the set of real numbers. The standard topology on R{\displaystyle \mathbb {R} } is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of non-zero radius about every point in the set. More generally, the Euclidean spaces Rn{\displaystyle \mathbb {R} ^{n}} can be given a topology. In the usual topology on Rn{\displaystyle \mathbb {R} ^{n}} the basic open sets are the open balls. Similarly, C,{\displaystyle \mathbb {C} ,} the set of complex numbers, and Cn{\displaystyle \mathbb {C} ^{n}} have a standard topology in which the basic open sets are open balls.
For anyalgebraic objectswe can introduce the discrete topology, under which the algebraic operations are continuous functions. For any such structure that is not finite, we often have a natural topology compatible with the algebraic operations, in the sense that the algebraic operations are still continuous. This leads to concepts such astopological groups,topological rings,topological fieldsandtopological vector spacesover the latter.Local fieldsare topological fields important innumber theory.
TheZariski topologyis defined algebraically on thespectrum of a ringor analgebraic variety. OnRn{\displaystyle \mathbb {R} ^{n}}orCn,{\displaystyle \mathbb {C} ^{n},}the closed sets of the Zariski topology are thesolution setsof systems ofpolynomialequations.
IfΓ{\displaystyle \Gamma }is afilteron a setX{\displaystyle X}then{∅}∪Γ{\displaystyle \{\varnothing \}\cup \Gamma }is a topology onX.{\displaystyle X.}
Many sets oflinear operatorsinfunctional analysisare endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function.
Alinear graphhas a natural topology that generalizes many of the geometric aspects ofgraphswithverticesandedges.
Outer spaceof afree groupFn{\displaystyle F_{n}}consists of the so-called "marked metric graph structures" of volume 1 onFn.{\displaystyle F_{n}.}[14]
Topological spaces can be broadly classified,up tohomeomorphism, by theirtopological properties. A topological property is a property of spaces that is invariant under homeomorphisms. To prove that two spaces are not homeomorphic it is sufficient to find a topological property not shared by them. Examples of such properties includeconnectedness,compactness, and variousseparation axioms. For algebraic invariants seealgebraic topology.
https://en.wikipedia.org/wiki/Topological_space
In themathematicalfield oftopology, auniform spaceis asetwith additionalstructurethat is used to defineuniform properties, such ascompleteness,uniform continuityanduniform convergence. Uniform spaces generalizemetric spacesandtopological groups, but the concept is designed to formulate the weakest axioms needed for most proofs inanalysis.
In addition to the usual properties of a topological structure, in a uniform space one formalizes the notions of relative closeness and closeness of points. In other words, ideas like "xis closer toathanyis tob" make sense in uniform spaces. By comparison, in a general topological space, given setsA,Bit is meaningful to say that a pointxisarbitrarily closetoA(i.e., in theclosureofA), or perhaps thatAis asmaller neighborhoodofxthanB, but notions of closeness of points and relative closeness are not described well by topological structure alone.
There are three equivalent definitions for a uniform space. They all consist of a space equipped with a uniform structure.
This definition adapts the presentation of a topological space in terms of neighborhood systems. A nonempty collection Φ{\displaystyle \Phi } of subsets of X×X{\displaystyle X\times X} is a uniform structure (or a uniformity) if it satisfies the following axioms:
1. If U∈Φ,{\displaystyle U\in \Phi ,} then U{\displaystyle U} contains the diagonal Δ={(x,x):x∈X}.{\displaystyle \Delta =\{(x,x):x\in X\}.}
2. If U∈Φ{\displaystyle U\in \Phi } and U⊆V⊆X×X,{\displaystyle U\subseteq V\subseteq X\times X,} then V∈Φ.{\displaystyle V\in \Phi .}
3. If U∈Φ{\displaystyle U\in \Phi } and V∈Φ,{\displaystyle V\in \Phi ,} then U∩V∈Φ.{\displaystyle U\cap V\in \Phi .}
4. If U∈Φ,{\displaystyle U\in \Phi ,} then there exists V∈Φ{\displaystyle V\in \Phi } such that, whenever (x,y){\displaystyle (x,y)} and (y,z){\displaystyle (y,z)} are in V,{\displaystyle V,} then (x,z){\displaystyle (x,z)} is in U{\displaystyle U} (that is, V∘V⊆U{\displaystyle V\circ V\subseteq U}).
5. If U∈Φ,{\displaystyle U\in \Phi ,} then U−1={(y,x):(x,y)∈U}{\displaystyle U^{-1}=\{(y,x):(x,y)\in U\}} is also in Φ.{\displaystyle \Phi .}
The non-emptiness ofΦ{\displaystyle \Phi }taken together with (2) and (3) states thatΦ{\displaystyle \Phi }is afilteronX×X.{\displaystyle X\times X.}If the last property is omitted we call the spacequasiuniform. An elementU{\displaystyle U}ofΦ{\displaystyle \Phi }is called avicinityorentouragefrom theFrenchword forsurroundings.
One usually writesU[x]={y:(x,y)∈U}=pr2(U∩({x}×X)),{\displaystyle U[x]=\{y:(x,y)\in U\}=\operatorname {pr} _{2}(U\cap (\{x\}\times X)\,),}whereU∩({x}×X){\displaystyle U\cap (\{x\}\times X)}is the vertical cross section ofU{\displaystyle U}andpr2{\displaystyle \operatorname {pr} _{2}}is the canonical projection onto the second coordinate. On a graph, a typical entourage is drawn as a blob surrounding the "y=x{\displaystyle y=x}" diagonal; all the differentU[x]{\displaystyle U[x]}'s form the vertical cross-sections. If(x,y)∈U{\displaystyle (x,y)\in U}then one says thatx{\displaystyle x}andy{\displaystyle y}areU{\displaystyle U}-close. Similarly, if all pairs of points in a subsetA{\displaystyle A}ofX{\displaystyle X}areU{\displaystyle U}-close (that is, ifA×A{\displaystyle A\times A}is contained inU{\displaystyle U}),A{\displaystyle A}is calledU{\displaystyle U}-small. An entourageU{\displaystyle U}issymmetricif(x,y)∈U{\displaystyle (x,y)\in U}precisely when(y,x)∈U.{\displaystyle (y,x)\in U.}The first axiom states that each point isU{\displaystyle U}-close to itself for each entourageU.{\displaystyle U.}The third axiom guarantees that being "bothU{\displaystyle U}-close andV{\displaystyle V}-close" is also a closeness relation in the uniformity. The fourth axiom states that for each entourageU{\displaystyle U}there is an entourageV{\displaystyle V}that is "not more than half as large". Finally, the last axiom states that the property "closeness" with respect to a uniform structure is symmetric inx{\displaystyle x}andy.{\displaystyle y.}
A base of entourages or fundamental system of entourages (or vicinities) of a uniformity Φ{\displaystyle \Phi } is any set B{\displaystyle {\mathcal {B}}} of entourages of Φ{\displaystyle \Phi } such that every entourage of Φ{\displaystyle \Phi } contains a set belonging to B.{\displaystyle {\mathcal {B}}.} Thus, by property 2 above, a fundamental system of entourages B{\displaystyle {\mathcal {B}}} is enough to specify the uniformity Φ{\displaystyle \Phi } unambiguously: Φ{\displaystyle \Phi } is the set of subsets of X×X{\displaystyle X\times X} that contain a set of B.{\displaystyle {\mathcal {B}}.} Every uniform space has a fundamental system of entourages consisting of symmetric entourages.
Intuition about uniformities is provided by the example ofmetric spaces: if(X,d){\displaystyle (X,d)}is a metric space, the setsUa={(x,y)∈X×X:d(x,y)≤a}wherea>0{\displaystyle U_{a}=\{(x,y)\in X\times X:d(x,y)\leq a\}\quad {\text{where}}\quad a>0}form a fundamental system of entourages for the standard uniform structure ofX.{\displaystyle X.}Thenx{\displaystyle x}andy{\displaystyle y}areUa{\displaystyle U_{a}}-close precisely when the distance betweenx{\displaystyle x}andy{\displaystyle y}is at mosta.{\displaystyle a.}
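To make the metric entourages concrete, here is a small illustrative sketch (the names entourage, d, U1 and the sample points are assumptions, not part of the article). It checks U_a-closeness on the real line and the "half-size" behaviour guaranteed by the triangle inequality, which is what the fourth axiom formalises.

```python
def entourage(a, d):
    """Membership predicate for the entourage U_a = {(x, y) : d(x, y) <= a}."""
    return lambda x, y: d(x, y) <= a

d = lambda x, y: abs(x - y)        # the usual metric on the real line

U1 = entourage(1.0, d)
print(U1(0.2, 0.9))                # True:  0.2 and 0.9 are U_1-close
print(U1(0.0, 1.5))                # False

# "Half-size" behaviour (fourth axiom): if x, y are U_{a/2}-close and y, z are
# U_{a/2}-close, then x, z are U_a-close, by the triangle inequality.
U_half = entourage(0.5, d)
x, y, z = 0.0, 0.45, 0.9
print(U_half(x, y) and U_half(y, z) and U1(x, z))   # True
```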
A uniformityΦ{\displaystyle \Phi }isfinerthan another uniformityΨ{\displaystyle \Psi }on the same set ifΦ⊇Ψ;{\displaystyle \Phi \supseteq \Psi ;}in that caseΨ{\displaystyle \Psi }is said to becoarserthanΦ.{\displaystyle \Phi .}
Uniform spaces may be defined alternatively and equivalently using systems of pseudometrics, an approach that is particularly useful in functional analysis (with pseudometrics provided by seminorms). More precisely, let f:X×X→R{\displaystyle f:X\times X\to \mathbb {R} } be a pseudometric on a set X.{\displaystyle X.} The inverse images Ua=f−1([0,a]){\displaystyle U_{a}=f^{-1}([0,a])} for a>0{\displaystyle a>0} can be shown to form a fundamental system of entourages of a uniformity. The uniformity generated by the Ua{\displaystyle U_{a}} is the uniformity defined by the single pseudometric f.{\displaystyle f.} Some authors call spaces whose topology is defined in terms of pseudometrics gauge spaces.
For afamily(fi){\displaystyle \left(f_{i}\right)}of pseudometrics onX,{\displaystyle X,}the uniform structure defined by the family is theleast upper boundof the uniform structures defined by the individual pseudometricsfi.{\displaystyle f_{i}.}A fundamental system of entourages of this uniformity is provided by the set offiniteintersections of entourages of the uniformities defined by the individual pseudometricsfi.{\displaystyle f_{i}.}If the family of pseudometrics isfinite, it can be seen that the same uniform structure is defined by asinglepseudometric, namely theupper envelopesupfi{\displaystyle \sup _{}f_{i}}of the family.
Less trivially, it can be shown that a uniform structure that admits acountablefundamental system of entourages (hence in particular a uniformity defined by a countable family of pseudometrics) can be defined by a single pseudometric. A consequence is thatanyuniform structure can be defined as above by a (possibly uncountable) family of pseudometrics (see Bourbaki: General Topology Chapter IX §1 no. 4).
Auniform space(X,Θ){\displaystyle (X,\Theta )}is a setX{\displaystyle X}equipped with a distinguished family of coveringsΘ,{\displaystyle \Theta ,}called "uniform covers", drawn from the set ofcoveringsofX,{\displaystyle X,}that form afilterwhen ordered by star refinement. One says that a coverP{\displaystyle \mathbf {P} }is astar refinementof coverQ,{\displaystyle \mathbf {Q} ,}writtenP<∗Q,{\displaystyle \mathbf {P} <^{*}\mathbf {Q} ,}if for everyA∈P,{\displaystyle A\in \mathbf {P} ,}there is aU∈Q{\displaystyle U\in \mathbf {Q} }such that ifA∩B≠∅,B∈P,{\displaystyle A\cap B\neq \varnothing ,B\in \mathbf {P} ,}thenB⊆U.{\displaystyle B\subseteq U.}Axiomatically, the condition of being a filter reduces to:
1. {X}{\displaystyle \{X\}} is a uniform cover (that is, {X}∈Θ{\displaystyle \{X\}\in \Theta }).
2. If P<∗Q{\displaystyle \mathbf {P} <^{*}\mathbf {Q} } and P{\displaystyle \mathbf {P} } is a uniform cover, then Q{\displaystyle \mathbf {Q} } is also a uniform cover.
3. If P{\displaystyle \mathbf {P} } and Q{\displaystyle \mathbf {Q} } are uniform covers, then there is a uniform cover R{\displaystyle \mathbf {R} } that star-refines both P{\displaystyle \mathbf {P} } and Q.{\displaystyle \mathbf {Q} .}
Given a point x{\displaystyle x} and a uniform cover P,{\displaystyle \mathbf {P} ,} one can consider the union of the members of P{\displaystyle \mathbf {P} } that contain x{\displaystyle x} as a typical neighbourhood of x{\displaystyle x} of "size" P,{\displaystyle \mathbf {P} ,} and this intuitive measure applies uniformly over the space.
Given a uniform space in the entourage sense, define a coverP{\displaystyle \mathbf {P} }to be uniform if there is some entourageU{\displaystyle U}such that for eachx∈X,{\displaystyle x\in X,}there is anA∈P{\displaystyle A\in \mathbf {P} }such thatU[x]⊆A.{\displaystyle U[x]\subseteq A.}These uniform covers form a uniform space as in the second definition. Conversely, given a uniform space in the uniform cover sense, the supersets of⋃{A×A:A∈P},{\displaystyle \bigcup \{A\times A:A\in \mathbf {P} \},}asP{\displaystyle \mathbf {P} }ranges over the uniform covers, are the entourages for a uniform space as in the first definition. Moreover, these two transformations are inverses of each other.[1]
Every uniform spaceX{\displaystyle X}becomes atopological spaceby defining a nonempty subsetO⊆X{\displaystyle O\subseteq X}to be open if and only if for everyx∈O{\displaystyle x\in O}there exists an entourageV{\displaystyle V}such thatV[x]{\displaystyle V[x]}is a subset ofO.{\displaystyle O.}In this topology, the neighbourhood filter of a pointx{\displaystyle x}is{V[x]:V∈Φ}.{\displaystyle \{V[x]:V\in \Phi \}.}This can be proved with a recursive use of the existence of a "half-size" entourage. Compared to a general topological space the existence of the uniform structure makes possible the comparison of sizes of neighbourhoods:V[x]{\displaystyle V[x]}andV[y]{\displaystyle V[y]}are considered to be of the "same size".
The topology defined by a uniform structure is said to beinduced by the uniformity. A uniform structure on a topological space iscompatiblewith the topology if the topology defined by the uniform structure coincides with the original topology. In general several different uniform structures can be compatible with a given topology onX.{\displaystyle X.}
A topological space is calleduniformizableif there is a uniform structure compatible with the topology.
Every uniformizable space is acompletely regulartopological space. Moreover, for a uniformizable spaceX{\displaystyle X}the following are equivalent:
1. X{\displaystyle X} is a Kolmogorov space.
2. X{\displaystyle X} is a Hausdorff space.
3. X{\displaystyle X} is a Tychonoff space.
4. For any compatible uniform structure, the intersection of all entourages is the diagonal {(x,x):x∈X}.{\displaystyle \{(x,x):x\in X\}.}
Some authors (e.g. Engelking) add this last condition directly in the definition of a uniformizable space.
The topology of a uniformizable space is always asymmetric topology; that is, the space is anR0-space.
Conversely, each completely regular space is uniformizable. A uniformity compatible with the topology of a completely regular space X{\displaystyle X} can be defined as the coarsest uniformity that makes all continuous real-valued functions on X{\displaystyle X} uniformly continuous. A fundamental system of entourages for this uniformity is provided by all finite intersections of sets (f×f)−1(V),{\displaystyle (f\times f)^{-1}(V),} where f{\displaystyle f} is a continuous real-valued function on X{\displaystyle X} and V{\displaystyle V} is an entourage of the uniform space R.{\displaystyle \mathbf {R} .} This uniformity defines a topology, which is clearly coarser than the original topology of X;{\displaystyle X;} that it is also finer than the original topology (hence coincides with it) is a simple consequence of complete regularity: for any x∈X{\displaystyle x\in X} and a neighbourhood V{\displaystyle V} of x,{\displaystyle x,} there is a continuous real-valued function f{\displaystyle f} with f(x)=0{\displaystyle f(x)=0} and equal to 1 in the complement of V.{\displaystyle V.}
In particular, a compact Hausdorff space is uniformizable. In fact, for a compact Hausdorff spaceX{\displaystyle X}the set of all neighbourhoods of the diagonal inX×X{\displaystyle X\times X}form theuniqueuniformity compatible with the topology.
A Hausdorff uniform space ismetrizableif its uniformity can be defined by acountablefamily of pseudometrics. Indeed, as discussedabove, such a uniformity can be defined by asinglepseudometric, which is necessarily a metric if the space is Hausdorff. In particular, if the topology of avector spaceis Hausdorff and definable by a countable family ofseminorms, it is metrizable.
Similar tocontinuous functionsbetweentopological spaces, which preservetopological properties, are theuniformly continuous functionsbetween uniform spaces, which preserve uniform properties.
A uniformly continuous function is defined as one where inverse images of entourages are again entourages, or equivalently, one where the inverse images of uniform covers are again uniform covers. Explicitly, a functionf:X→Y{\displaystyle f:X\to Y}between uniform spaces is calleduniformly continuousif for every entourageV{\displaystyle V}inY{\displaystyle Y}there exists an entourageU{\displaystyle U}inX{\displaystyle X}such that if(x1,x2)∈U{\displaystyle \left(x_{1},x_{2}\right)\in U}then(f(x1),f(x2))∈V;{\displaystyle \left(f\left(x_{1}\right),f\left(x_{2}\right)\right)\in V;}or in other words, wheneverV{\displaystyle V}is an entourage inY{\displaystyle Y}then(f×f)−1(V){\displaystyle (f\times f)^{-1}(V)}is an entourage inX{\displaystyle X}, wheref×f:X×X→Y×Y{\displaystyle f\times f:X\times X\to Y\times Y}is defined by(f×f)(x1,x2)=(f(x1),f(x2)).{\displaystyle (f\times f)\left(x_{1},x_{2}\right)=\left(f\left(x_{1}\right),f\left(x_{2}\right)\right).}
All uniformly continuous functions are continuous with respect to the induced topologies.
Uniform spaces with uniform maps form acategory. Anisomorphismbetween uniform spaces is called auniform isomorphism; explicitly, it is auniformly continuousbijectionwhoseinverseis also uniformly continuous.
Auniform embeddingis an injective uniformly continuous mapi:X→Y{\displaystyle i:X\to Y}between uniform spaces whose inversei−1:i(X)→X{\displaystyle i^{-1}:i(X)\to X}is also uniformly continuous, where the imagei(X){\displaystyle i(X)}has the subspace uniformity inherited fromY.{\displaystyle Y.}
Generalizing the notion ofcomplete metric space, one can also define completeness for uniform spaces. Instead of working withCauchy sequences, one works withCauchy filters(orCauchy nets).
ACauchy filter(respectively, aCauchy prefilter)F{\displaystyle F}on a uniform spaceX{\displaystyle X}is afilter(respectively, aprefilter)F{\displaystyle F}such that for every entourageU,{\displaystyle U,}there existsA∈F{\displaystyle A\in F}withA×A⊆U.{\displaystyle A\times A\subseteq U.}In other words, a filter is Cauchy if it contains "arbitrarily small" sets. It follows from the definitions that each filter that converges (with respect to the topology defined by the uniform structure) is a Cauchy filter.
Aminimal Cauchy filteris a Cauchy filter that does not contain any smaller (that is, coarser) Cauchy filter (other than itself). It can be shown that every Cauchy filter contains a uniqueminimal Cauchy filter. The neighbourhood filter of each point (the filter consisting of all neighbourhoods of the point) is a minimal Cauchy filter.
Conversely, a uniform space is calledcompleteif every Cauchy filter converges. Any compact Hausdorff space is a complete uniform space with respect to the unique uniformity compatible with the topology.
Complete uniform spaces enjoy the following important property: iff:A→Y{\displaystyle f:A\to Y}is auniformly continuousfunction from adensesubsetA{\displaystyle A}of a uniform spaceX{\displaystyle X}into acompleteuniform spaceY,{\displaystyle Y,}thenf{\displaystyle f}can be extended (uniquely) into a uniformly continuous function on all ofX.{\displaystyle X.}
A topological space that can be made into a complete uniform space, whose uniformity induces the original topology, is called acompletely uniformizable space.
Acompletionof a uniform spaceX{\displaystyle X}is a pair(i,C){\displaystyle (i,C)}consisting of a complete uniform spaceC{\displaystyle C}and auniform embeddingi:X→C{\displaystyle i:X\to C}whose imagei(X){\displaystyle i(X)}is adense subsetofC.{\displaystyle C.}
As with metric spaces, every uniform space X{\displaystyle X} has a Hausdorff completion: that is, there exists a complete Hausdorff uniform space Y{\displaystyle Y} and a uniformly continuous map i:X→Y{\displaystyle i:X\to Y} (if X{\displaystyle X} is a Hausdorff uniform space then i{\displaystyle i} is a topological embedding) with the following property: for any uniformly continuous mapping f{\displaystyle f} of X{\displaystyle X} into a complete Hausdorff uniform space Z,{\displaystyle Z,} there is a unique uniformly continuous mapping g:Y→Z{\displaystyle g:Y\to Z} such that f=g∘i.{\displaystyle f=g\circ i.}
The Hausdorff completionY{\displaystyle Y}is unique up to isomorphism. As a set,Y{\displaystyle Y}can be taken to consist of theminimalCauchy filters onX.{\displaystyle X.}As the neighbourhood filterB(x){\displaystyle \mathbf {B} (x)}of each pointx{\displaystyle x}inX{\displaystyle X}is a minimal Cauchy filter, the mapi{\displaystyle i}can be defined by mappingx{\displaystyle x}toB(x).{\displaystyle \mathbf {B} (x).}The mapi{\displaystyle i}thus defined is in general not injective; in fact, the graph of the equivalence relationi(x)=i(x′){\displaystyle i(x)=i(x')}is the intersection of all entourages ofX,{\displaystyle X,}and thusi{\displaystyle i}is injective precisely whenX{\displaystyle X}is Hausdorff.
The uniform structure onY{\displaystyle Y}is defined as follows: for eachsymmetricentourageV{\displaystyle V}(that is, such that(x,y)∈V{\displaystyle (x,y)\in V}implies(y,x)∈V{\displaystyle (y,x)\in V}), letC(V){\displaystyle C(V)}be the set of all pairs(F,G){\displaystyle (F,G)}of minimal Cauchy filterswhich have in common at least oneV{\displaystyle V}-small set. The setsC(V){\displaystyle C(V)}can be shown to form a fundamental system of entourages;Y{\displaystyle Y}is equipped with the uniform structure thus defined.
The seti(X){\displaystyle i(X)}is then a dense subset ofY.{\displaystyle Y.}IfX{\displaystyle X}is Hausdorff, theni{\displaystyle i}is an isomorphism ontoi(X),{\displaystyle i(X),}and thusX{\displaystyle X}can be identified with a dense subset of its completion. Moreover,i(X){\displaystyle i(X)}is always Hausdorff; it is called theHausdorff uniform space associated withX.{\displaystyle X.}IfR{\displaystyle R}denotes the equivalence relationi(x)=i(x′),{\displaystyle i(x)=i(x'),}then the quotient spaceX/R{\displaystyle X/R}is homeomorphic toi(X).{\displaystyle i(X).}
Every metric space (M,d){\displaystyle (M,d)} can be considered as a uniform space: a fundamental system of entourages is provided by the sets Ua≜d−1([0,a])={(m,n)∈M×M:d(m,n)≤a}.{\displaystyle U_{a}\triangleq d^{-1}([0,a])=\{(m,n)\in M\times M:d(m,n)\leq a\}.}
BeforeAndré Weilgave the first explicit definition of a uniform structure in 1937, uniform concepts, like completeness, were discussed usingmetric spaces.Nicolas Bourbakiprovided the definition of uniform structure in terms of entourages in the bookTopologie GénéraleandJohn Tukeygave the uniform cover definition. Weil also characterized uniform spaces in terms of a family of pseudometrics.
https://en.wikipedia.org/wiki/Uniform_space
Inmathematics,Choquet theory, named afterGustave Choquet, is an area offunctional analysisandconvex analysisconcerned withmeasureswhich havesupporton theextreme pointsof aconvex setC. Roughly speaking, everyvectorofCshould appear as a weighted average of extreme points, a concept made more precise by generalizing the notion of weighted average from aconvex combinationto anintegraltaken over the setEof extreme points. HereCis a subset of areal vector spaceV, and the main thrust of the theory is to treat the cases whereVis an infinite-dimensional (locally convex Hausdorff)topological vector spacealong lines similar to the finite-dimensional case. The main concerns of Gustave Choquet were inpotential theory. Choquet theory has become a general paradigm, particularly for treatingconvex conesas determined by their extremerays, and so for many different notions ofpositivityin mathematics.
The two ends of aline segmentdetermine the points in between: in vector terms the segment fromvtowconsists of the λv+ (1 − λ)wwith 0 ≤ λ ≤ 1. The classical result ofHermann Minkowskisays that inEuclidean space, abounded,closedconvex setCis theconvex hullof its extreme point setE, so that anycinCis a (finite)convex combinationof pointseofE. HereEmay be a finite or aninfinite set. In vector terms, by assigning non-negative weightsw(e) to theeinE,almost all0, we can represent anycinCasc=∑e∈Ew(e)e{\displaystyle c=\sum _{e\in E}w(e)e\ }with∑e∈Ew(e)=1.{\displaystyle \sum _{e\in E}w(e)=1.\ }
In any case thew(e) give aprobability measuresupported on a finite subset ofE. For anyaffine functionfonC, its value at the pointcisf(c)=∫f(e)dw(e).{\displaystyle f(c)=\int f(e)dw(e).}
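To illustrate the finite-dimensional statement, the following sketch (assuming NumPy; the particular triangle, point, and affine function are illustrative choices) recovers the weights w(e) for a point of a triangle and checks that an affine function's value at the point equals the weighted average of its values at the vertices.

```python
import numpy as np

# Extreme points (vertices) of a triangle in R^2 and a point c of the triangle.
E = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
c = np.array([0.2, 0.3])

# Solve for weights w with w >= 0, sum(w) = 1 and sum_e w(e) e = c.
A = np.vstack([E.T, np.ones(3)])
b = np.append(c, 1.0)
w = np.linalg.solve(A, b)
print(w)                                        # [0.5 0.2 0.3]

# An affine function f(x) = a . x + b0 satisfies f(c) = sum_e w(e) f(e).
a_vec, b0 = np.array([2.0, -1.0]), 0.5
f = lambda x: a_vec @ x + b0
print(np.isclose(f(c), w @ np.array([f(e) for e in E])))   # True
```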
In the infinite dimensional setting, one would like to make a similar statement.
In practiceVwill be aBanach space. The originalKrein–Milman theoremfollows from Choquet's result. Another corollary is theRiesz representation theoremforstateson the continuous functions on a metrizable compact Hausdorff space.
More generally, forValocally convex topological vector space, theChoquet–Bishop–de Leeuw theorem[1]gives the same formal statement.
In addition to the existence of a probability measure supported on the extreme boundary that represents a given pointc, one might also consider the uniqueness of such measures. It is easy to see that uniqueness does not hold even in the finite dimensional setting. One can take, for counterexamples, the convex set to be acubeor a ball inR3. Uniqueness does hold, however, when the convex set is a finite dimensionalsimplex. A finite dimensional simplex is a special case of aChoquet simplex. Any point in a Choquet simplex is represented by a unique probability measure on the extreme points.
https://en.wikipedia.org/wiki/Choquet_theory
TheHewitt–Savage zero–one lawis atheoreminprobability theory, similar toKolmogorov's zero–one lawand theBorel–Cantelli lemma, that specifies that a certain type of event will eitheralmost surelyhappen or almost surely not happen. It is sometimes known as theSavage-Hewitt law for symmetric events. It is named afterEdwin HewittandLeonard Jimmie Savage.[1]
Let{Xn}n=1∞{\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }}be asequenceofindependent and identically-distributed random variablestaking values in a setX{\displaystyle \mathbb {X} }. The Hewitt-Savage zero–one law says that any event whose occurrence or non-occurrence is determined by the values of these random variables and whose occurrence or non-occurrence is unchanged by finitepermutationsof the indices, hasprobabilityeither 0 or 1 (a "finite" permutation is one that leaves all but finitely many of the indices fixed).
Somewhat more abstractly, define theexchangeable sigma algebraorsigma algebra of symmetric eventsE{\displaystyle {\mathcal {E}}}to be the set of events (depending on the sequence of variables{Xn}n=1∞{\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }}) which are invariant underfinitepermutationsof the indices in the sequence{Xn}n=1∞{\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }}. ThenA∈E⟹P(A)∈{0,1}{\displaystyle A\in {\mathcal {E}}\implies \mathbb {P} (A)\in \{0,1\}}.
Since any finite permutation can be written as a product oftranspositions, if we wish to check whether or not an eventA{\displaystyle A}is symmetric (lies inE{\displaystyle {\mathcal {E}}}), it is enough to check if its occurrence is unchanged by an arbitrary transposition(i,j){\displaystyle (i,j)},i,j∈N{\displaystyle i,j\in \mathbb {N} }.
Let the sequence{Xn}n=1∞{\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }}of independent and identically distributed random variables take values in[0,∞){\displaystyle [0,\infty )}. Then the event that the series∑n=1∞Xn{\displaystyle \sum _{n=1}^{\infty }X_{n}}converges (to a finite value) is a symmetric event inE{\displaystyle {\mathcal {E}}}, since its occurrence is unchanged under transpositions (for a finite re-ordering, the convergence or divergence of the series—and, indeed, the numerical value of the sum itself—is independent of the order in which we add up the terms). Thus, the series either converges almost surely or diverges almost surely. If we assume in addition that the commonexpected valueE[Xn]>0{\displaystyle \mathbb {E} [X_{n}]>0}(which essentially means thatP(Xn=0)<1{\displaystyle \mathbb {P} (X_{n}=0)<1}because of the random variables' non-negativity), we may conclude that
P(∑n=1∞Xn=+∞)=1,{\displaystyle \mathbb {P} \left(\sum _{n=1}^{\infty }X_{n}=+\infty \right)=1,} i.e. the series diverges almost surely. This is a particularly simple application of the Hewitt–Savage zero–one law. In many situations, it can be easy to apply the Hewitt–Savage zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one.
Continuing with the previous example, define
SN=∑n=1NXn,{\displaystyle S_{N}=\sum _{n=1}^{N}X_{n},} which is the position at step N of a random walk with the iid increments Xn. The event {SN = 0 infinitely often} is invariant under finite permutations. Therefore, the zero–one law is applicable and one infers that the probability of a random walk with real iid increments visiting the origin infinitely often is either one or zero. Visiting the origin infinitely often is a tail event with respect to the sequence (SN), but the SN are not independent and therefore Kolmogorov's zero–one law is not directly applicable here.[2]
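The dichotomy can be illustrated numerically for the simple symmetric walk on the integers, for which recurrence is classical, so the "infinitely often" event has probability one. The following Monte Carlo sketch (standard library only; the step counts and trial count are illustrative) only illustrates the tendency and of course proves nothing.

```python
import random

def returns_to_origin(n_steps):
    """One simple symmetric random walk on the integers; does it revisit 0 within n_steps?"""
    s = 0
    for _ in range(n_steps):
        s += random.choice((-1, 1))
        if s == 0:
            return True
    return False

random.seed(0)
trials = 1000
for n in (10, 100, 1000, 10000):
    freq = sum(returns_to_origin(n) for _ in range(trials)) / trials
    print(n, freq)
# The empirical return frequency creeps toward 1 as n grows, consistent with the
# event "S_N = 0 infinitely often" having probability one for this recurrent walk.
```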
https://en.wikipedia.org/wiki/Hewitt%E2%80%93Savage_zero%E2%80%93one_law
In themathematical theoryoffunctional analysis, theKrein–Milman theoremis apropositionaboutcompactconvex setsinlocally convextopological vector spaces(TVSs).
Krein–Milman theorem[1]—Acompactconvexsubset of aHausdorfflocally convextopological vector spaceis equal to the closedconvex hullof itsextreme points.
This theorem generalizes to infinite-dimensional spaces and to arbitrary compact convex sets the following basic observation: a convex (i.e. "filled") triangle, including its perimeter and the area "inside of it", is equal to the convex hull of its three vertices, where these vertices are exactly the extreme points of this shape.
This observation also holds for any other convexpolygonin the planeR2.{\displaystyle \mathbb {R} ^{2}.}
Throughout,X{\displaystyle X}will be arealorcomplexvector space.
For any elements x{\displaystyle x} and y{\displaystyle y} in a vector space, the set [x,y]:={tx+(1−t)y:0≤t≤1}{\displaystyle [x,y]:=\{tx+(1-t)y:0\leq t\leq 1\}} is called the closed line segment or closed interval between x{\displaystyle x} and y.{\displaystyle y.} The open line segment or open interval between x{\displaystyle x} and y{\displaystyle y} is (x,y):=∅{\displaystyle (x,y):=\varnothing } when x=y{\displaystyle x=y} while it is (x,y):={tx+(1−t)y:0<t<1}{\displaystyle (x,y):=\{tx+(1-t)y:0<t<1\}} when x≠y;{\displaystyle x\neq y;}[2] it satisfies (x,y)=[x,y]∖{x,y}{\displaystyle (x,y)=[x,y]\setminus \{x,y\}} and [x,y]=(x,y)∪{x,y}.{\displaystyle [x,y]=(x,y)\cup \{x,y\}.} The points x{\displaystyle x} and y{\displaystyle y} are called the endpoints of these intervals. An interval is said to be non-degenerate or proper if its endpoints are distinct.
The intervals[x,x]={x}{\displaystyle [x,x]=\{x\}}and[x,y]{\displaystyle [x,y]}always contain their endpoints while(x,x)=∅{\displaystyle (x,x)=\varnothing }and(x,y){\displaystyle (x,y)}never contain either of their endpoints.
Ifx{\displaystyle x}andy{\displaystyle y}are points in the real lineR{\displaystyle \mathbb {R} }then the above definition of[x,y]{\displaystyle [x,y]}is the same as its usual definition as aclosed interval.
For anyp,x,y∈X,{\displaystyle p,x,y\in X,}the pointp{\displaystyle p}is said to (strictly)lie betweenx{\displaystyle x}andy{\displaystyle y}ifp{\displaystyle p}belongs to the open line segment(x,y).{\displaystyle (x,y).}[2]
IfK{\displaystyle K}is a subset ofX{\displaystyle X}andp∈K,{\displaystyle p\in K,}thenp{\displaystyle p}is called anextreme pointofK{\displaystyle K}if it does not lie between any twodistinctpoints ofK.{\displaystyle K.}That is, if there doesnotexistx,y∈K{\displaystyle x,y\in K}and0<t<1{\displaystyle 0<t<1}such thatx≠y{\displaystyle x\neq y}andp=tx+(1−t)y.{\displaystyle p=tx+(1-t)y.}In this article, the set of all extreme points ofK{\displaystyle K}will be denoted byextreme(K).{\displaystyle \operatorname {extreme} (K).}[2]
For example, the vertices of any convex polygon in the planeR2{\displaystyle \mathbb {R} ^{2}}are the extreme points of that polygon.
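For a finite point set in the plane, the extreme points of its convex hull can be computed numerically. A minimal sketch, assuming SciPy is available (scipy.spatial.ConvexHull reports the indices of the hull's vertices); the sample points are an illustrative choice.

```python
import numpy as np
from scipy.spatial import ConvexHull

points = np.array([
    [0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0],   # corners of a square
    [1.0, 1.0], [0.5, 0.5], [1.5, 0.3],               # points in the interior
])

hull = ConvexHull(points)
print(points[hull.vertices])
# Only the four corners are reported: the interior points are not extreme points
# of the convex hull of this finite set.
```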
The set of extreme points of the closed unit disk in R2{\displaystyle \mathbb {R} ^{2}} is the unit circle.
Everyopen intervaland degenerate closed interval inR{\displaystyle \mathbb {R} }has no extreme points while the extreme points of a non-degenerateclosed interval[x,y]{\displaystyle [x,y]}arex{\displaystyle x}andy.{\displaystyle y.}
A setS{\displaystyle S}is calledconvexif for any two pointsx,y∈S,{\displaystyle x,y\in S,}S{\displaystyle S}contains the line segment[x,y].{\displaystyle [x,y].}The smallest convex set containingS{\displaystyle S}is called theconvex hullofS{\displaystyle S}and it is denoted bycoS.{\displaystyle \operatorname {co} S.}Theclosed convex hullof a setS,{\displaystyle S,}denoted byco¯(S),{\displaystyle {\overline {\operatorname {co} }}(S),}is the smallest closed and convex set containingS.{\displaystyle S.}It is also equal to theintersectionof all closed convex subsets that containS{\displaystyle S}and to theclosureof theconvex hullofS{\displaystyle S}; that is,co¯(S)=co(S)¯,{\displaystyle {\overline {\operatorname {co} }}(S)={\overline {\operatorname {co} (S)}},}where the right hand side denotes the closure ofco(S){\displaystyle \operatorname {co} (S)}while the left hand side is notation.
For example, the convex hull of any set of three distinct points forms either a closed line segment (if they arecollinear) or else a solid (that is, "filled") triangle, including its perimeter.
And in the planeR2,{\displaystyle \mathbb {R} ^{2},}the unit circle isnotconvex but the closed unit disk is convex and furthermore, this disk is equal to the convex hull of the circle.
The separable Hilbert spaceLp spaceℓ2(N){\displaystyle \ell ^{2}(\mathbb {N} )}of square-summable sequences with the usual norm‖⋅‖2{\displaystyle \|\cdot \|_{2}}has a compact subsetS{\displaystyle S}whose convex hullco(S){\displaystyle \operatorname {co} (S)}isnotclosed and thus alsonotcompact.[3]However, like in allcompleteHausdorff locally convex spaces, theclosedconvex hullco¯S{\displaystyle {\overline {\operatorname {co} }}S}of this compact subset is compact.[4]But, if a Hausdorff locally convex space is not complete, then it is in generalnotguaranteed thatco¯S{\displaystyle {\overline {\operatorname {co} }}S}is compact wheneverS{\displaystyle S}is; an example can even be found in a (non-complete)pre-Hilbertvector subspace ofℓ2(N).{\displaystyle \ell ^{2}(\mathbb {N} ).}Every compact subset istotally bounded(also called "precompact") and the closed convex hull of a totally bounded subset of a Hausdorff locally convex space is guaranteed to be totally bounded.[5]
Krein–Milman theorem[6]—IfK{\displaystyle K}is a compact subset of aHausdorfflocally convextopological vector spacethen the set ofextreme pointsofK{\displaystyle K}has the same closed convex hull asK.{\displaystyle K.}
In the case where the compact setK{\displaystyle K}is also convex, the above theorem has as a corollary the first part of the next theorem,[6]which is also often called the Krein–Milman theorem.
Krein–Milman theorem[2]—SupposeX{\displaystyle X}is aHausdorfflocally convextopological vector space(for example, anormed space) andK{\displaystyle K}is a compact and convex subset ofX.{\displaystyle X.}ThenK{\displaystyle K}is equal to the closed convex hull of itsextreme points:K=co¯(extreme(K)).{\displaystyle K~=~{\overline {\operatorname {co} }}(\operatorname {extreme} (K)).}
Moreover, if B⊆K{\displaystyle B\subseteq K} then K{\displaystyle K} is equal to the closed convex hull of B{\displaystyle B} if and only if extremeK⊆clB,{\displaystyle \operatorname {extreme} K\subseteq \operatorname {cl} B,} where clB{\displaystyle \operatorname {cl} B} is the closure of B.{\displaystyle B.}
The convex hull of the extreme points ofK{\displaystyle K}forms a convex subset ofK{\displaystyle K}so the main burden of the proof is to show that there are enough extreme points so that their convex hull covers all ofK.{\displaystyle K.}For this reason, the following corollary to the above theorem is also often called the Krein–Milman theorem.
(KM) Krein–Milman theorem (Existence)[2]—Every non-empty compact convex subset of aHausdorfflocally convextopological vector spacehas anextreme point; that is, the set of its extreme points is not empty.
To visualize this theorem and its conclusion, consider the particular case where K{\displaystyle K} is a convex polygon.
In this case, the corners of the polygon (which are its extreme points) are all that is needed to recover the polygon shape.
The statement of the theorem is false if the polygon is not convex, as then there are many ways of drawing a polygon having given points as corners.
The requirement that the convex setK{\displaystyle K}be compact can be weakened to give the following strengthened generalization version of the theorem.[7]
(SKM)Strong Krein–Milman theorem (Existence)[8]—SupposeX{\displaystyle X}is aHausdorfflocally convextopological vector spaceandK{\displaystyle K}is a non-empty convex subset ofX{\displaystyle X}with the property that wheneverC{\displaystyle {\mathcal {C}}}is a cover ofK{\displaystyle K}byconvexclosed subsets ofX{\displaystyle X}such that{K∩C:C∈C}{\displaystyle \{K\cap C:C\in {\mathcal {C}}\}}has thefinite intersection property, thenK∩⋂C∈CC{\displaystyle K\cap \bigcap _{C\in {\mathcal {C}}}C}is not empty.
Thenextreme(K){\displaystyle \operatorname {extreme} (K)}is not empty.
The property above is sometimes calledquasicompactnessorconvex compactness.Compactnessimpliesconvex compactnessbecause atopological spaceis compact if and only if everyfamilyof closed subsets having thefinite intersection property(FIP) has non-empty intersection (that is, itskernelis not empty).
Thedefinition of convex compactnessis similar to this characterization ofcompact spacesin terms of the FIP, except that it only involves those closed subsets that are alsoconvex(rather than all closed subsets).
The assumption oflocal convexityfor the ambient space is necessary, because James Roberts (1977) constructed a counter-example for the non-locally convex spaceLp[0,1]{\displaystyle L^{p}[0,1]}where0<p<1.{\displaystyle 0<p<1.}[9]
Linearity is also needed, because the statement fails for weakly compact convex sets inCAT(0) spaces, as proved byNicolas Monod(2016).[10]However, Theo Buehler (2006) proved that the Krein–Milman theorem does hold formetricallycompact CAT(0) spaces.[11]
Under the previous assumptions onK,{\displaystyle K,}ifT{\displaystyle T}is asubsetofK{\displaystyle K}and the closed convex hull ofT{\displaystyle T}is all ofK,{\displaystyle K,}then everyextreme pointofK{\displaystyle K}belongs to theclosureofT.{\displaystyle T.}This result is known asMilman's(partial)converseto the Krein–Milman theorem.[12]
TheChoquet–Bishop–de Leeuw theoremstates that every point inK{\displaystyle K}is thebarycenterof aprobability measuresupported on the set ofextreme pointsofK.{\displaystyle K.}
Under theZermelo–Fraenkel set theory(ZF) axiomatic framework, theaxiom of choice(AC) suffices to prove all versions of the Krein–Milman theorem given above, including statementKMand its generalizationSKM.
The axiom of choice also implies, but is not equivalent to, theBoolean prime ideal theorem(BPI), which is equivalent to theBanach–Alaoglu theorem.
Conversely, the Krein–Milman theoremKMtogether with theBoolean prime ideal theorem(BPI) imply the axiom of choice.[13]In summary,ACholds if and only if bothKMandBPIhold.[8]It follows that underZF, the axiom of choice is equivalent to the following statement:
Furthermore,SKMtogether with theHahn–Banach theoremforreal vector spaces(HB) are also equivalent to the axiom of choice.[8]It is known thatBPIimpliesHB, but that it is not equivalent to it (said differently,BPIis strictly stronger thanHB).
The original statement proved byMark KreinandDavid Milman(1940) was somewhat less general than the form stated here.[14]
Earlier,Hermann Minkowski(1911) proved that ifX{\displaystyle X}is3-dimensionalthenK{\displaystyle K}equals the convex hull of the set of its extreme points.[15]This assertion was expanded to the case of any finite dimension byErnst Steinitz(1916).[16]The Krein–Milman theorem generalizes this to arbitrary locally convexX{\displaystyle X}; however, to generalize from finite to infinite dimensional spaces, it is necessary to use the closure.
This article incorporates material from Krein–Milman theorem onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
https://en.wikipedia.org/wiki/Krein%E2%80%93Milman_theorem
Inmathematics, especially inprobability theoryandergodic theory, theinvariant sigma-algebrais asigma-algebraformed by sets which areinvariantunder agroup actionordynamical system. It can be interpreted as of being "indifferent" to the dynamics.
The invariant sigma-algebra appears in the study ofergodic systems, as well as in theorems ofprobability theorysuch asde Finetti's theoremand theHewitt-Savage law.
Let(X,F){\displaystyle (X,{\mathcal {F}})}be ameasurable space, and letT:(X,F)→(X,F){\displaystyle T:(X,{\mathcal {F}})\to (X,{\mathcal {F}})}be ameasurable function. A measurable subsetS∈F{\displaystyle S\in {\mathcal {F}}}is calledinvariantif and only ifT−1(S)=S{\displaystyle T^{-1}(S)=S}.[1][2][3]Equivalently, if for everyx∈X{\displaystyle x\in X}, we have thatx∈S{\displaystyle x\in S}if and only ifT(x)∈S{\displaystyle T(x)\in S}.
More generally, letM{\displaystyle M}be agroupor amonoid, letα:M×X→X{\displaystyle \alpha :M\times X\to X}be amonoid action, and denote the action ofm∈M{\displaystyle m\in M}onX{\displaystyle X}byαm:X→X{\displaystyle \alpha _{m}:X\to X}.
A subsetS⊆X{\displaystyle S\subseteq X}isα{\displaystyle \alpha }-invariantif for everym∈M{\displaystyle m\in M},αm−1(S)=S{\displaystyle \alpha _{m}^{-1}(S)=S}.
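For a finite set and a single measurable map given as a lookup table, strict invariance T^{-1}(S) = S can be checked directly. A small illustrative sketch (the particular map, a rotation of Z/6, and the subsets are arbitrary choices):

```python
def is_invariant(S, T, X):
    """S is (strictly) invariant under T iff T^{-1}(S) = S; T is a dict on the finite set X."""
    preimage = {x for x in X if T[x] in S}
    return preimage == set(S)

X = set(range(6))
T = {x: (x + 2) % 6 for x in X}          # rotation of Z/6 by 2

print(is_invariant({0, 2, 4}, T, X))     # True:  the even residues map onto themselves
print(is_invariant({0, 1}, T, X))        # False: T^{-1}({0, 1}) = {4, 5}
```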
Let(X,F){\displaystyle (X,{\mathcal {F}})}be ameasurable space, and letT:(X,F)→(X,F){\displaystyle T:(X,{\mathcal {F}})\to (X,{\mathcal {F}})}be ameasurable function. A measurable subset (event)S∈F{\displaystyle S\in {\mathcal {F}}}is calledalmost surelyinvariantif and only if itsindicator function1S{\displaystyle 1_{S}}isalmost surelyequal to the indicator function1T−1(S){\displaystyle 1_{T^{-1}(S)}}.[4][5][3]
Similarly, given a measure-preservingMarkov kernelk:(X,F,p)→(X,F,p){\displaystyle k:(X,{\mathcal {F}},p)\to (X,{\mathcal {F}},p)}, we call an eventS∈F{\displaystyle S\in {\mathcal {F}}}almost surely invariantif and only ifk(S∣x)=1S(x){\displaystyle k(S\mid x)=1_{S}(x)}for almost allx∈X{\displaystyle x\in X}.
As for the case of strictly invariant sets, the definition can be extended to an arbitrary group or monoid action.
In many cases, almost surely invariant sets differ by invariant sets only by a null set (see below).
Both strictly invariant sets and almost surely invariant sets are closed under taking countable unions and complements, and hence they formsigma-algebras.
These sigma-algebras are usually called either the invariant sigma-algebra or the sigma-algebra of invariant events, both in the strict case and in the almost sure case, depending on the author.[1][2][3][4][5] For the purposes of this article, let us denote by I{\displaystyle {\mathcal {I}}} the sigma-algebra of strictly invariant sets, and by I~{\displaystyle {\tilde {\mathcal {I}}}} the sigma-algebra of almost surely invariant sets.
Given a measurable space(X,A){\displaystyle (X,{\mathcal {A}})}, denote by(XN,A⊗N){\displaystyle (X^{\mathbb {N} },{\mathcal {A}}^{\otimes \mathbb {N} })}be the countablecartesian powerofX{\displaystyle X}, equipped with theproduct sigma-algebra.
We can view XN{\displaystyle X^{\mathbb {N} }} as the space of infinite sequences (x0,x1,x2,…){\displaystyle (x_{0},x_{1},x_{2},\dots )} of elements of X.{\displaystyle X.}
Consider now the groupS∞{\displaystyle S_{\infty }}offinitepermutationsofN{\displaystyle \mathbb {N} }, i.e.bijectionsσ:N→N{\displaystyle \sigma :\mathbb {N} \to \mathbb {N} }such thatσ(n)≠n{\displaystyle \sigma (n)\neq n}only for finitely manyn∈N{\displaystyle n\in \mathbb {N} }.
Each finite permutationσ{\displaystyle \sigma }acts measurably onXN{\displaystyle X^{\mathbb {N} }}by permuting the components, and so we have an action of the countable groupS∞{\displaystyle S_{\infty }}onXN{\displaystyle X^{\mathbb {N} }}.
An event invariant under this action is often called an exchangeable event or symmetric event, and the sigma-algebra of invariant events is often called the exchangeable sigma-algebra.
Arandom variableonXN{\displaystyle X^{\mathbb {N} }}is exchangeable (i.e. permutation-invariant) if and only if it is measurable for the exchangeable sigma-algebra.
The exchangeable sigma-algebra plays a role in theHewitt-Savage zero-one law, which can be equivalently stated by saying that for everyprobability measurep{\displaystyle p}on(X,A){\displaystyle (X,{\mathcal {A}})}, theproduct measurep⊗N{\displaystyle p^{\otimes \mathbb {N} }}onXN{\displaystyle X^{\mathbb {N} }}assigns to each exchangeable event probability either zero or one.[9]Equivalently, for the measurep⊗N{\displaystyle p^{\otimes \mathbb {N} }}, every exchangeable random variable onXN{\displaystyle X^{\mathbb {N} }}is almost surely constant.
It also plays a role in thede Finetti theorem.[9]
As in the example above, given a measurable space(X,A){\displaystyle (X,{\mathcal {A}})}, consider the countably infinite cartesian product(XN,A⊗N){\displaystyle (X^{\mathbb {N} },{\mathcal {A}}^{\otimes \mathbb {N} })}.
Consider now theshiftmapT:XN→XN{\displaystyle T:X^{\mathbb {N} }\to X^{\mathbb {N} }}given by mapping(x0,x1,x2,…)∈XN{\displaystyle (x_{0},x_{1},x_{2},\dots )\in X^{\mathbb {N} }}to(x1,x2,x3,…)∈XN{\displaystyle (x_{1},x_{2},x_{3},\dots )\in X^{\mathbb {N} }}.
An event invariant under this shift map is called a shift-invariant event, and the resulting sigma-algebra of invariant events is sometimes called the shift-invariant sigma-algebra.
This sigma-algebra is related to the one oftail events, which is given by the following intersection,
⋂n∈Nσ(⋃m≥nAm),{\displaystyle \bigcap _{n\in \mathbb {N} }\sigma \left(\bigcup _{m\geq n}{\mathcal {A}}_{m}\right),} where Am⊆A⊗N{\displaystyle {\mathcal {A}}_{m}\subseteq {\mathcal {A}}^{\otimes \mathbb {N} }} is the sigma-algebra induced on XN{\displaystyle X^{\mathbb {N} }} by the projection on the m{\displaystyle m}-th component πm:(XN,A⊗N)→(X,A).{\displaystyle \pi _{m}:(X^{\mathbb {N} },{\mathcal {A}}^{\otimes \mathbb {N} })\to (X,{\mathcal {A}}).}
Every shift-invariant event is a tail event, but the converse is not true.
https://en.wikipedia.org/wiki/Invariant_sigma-algebra
SigSpec(acronym ofSIGnificance SPECtrum) is a statistical technique to provide the reliability of periodicities in a measured (noisy and not necessarily equidistant)time series.[1]It relies on the amplitudespectrumobtained by theDiscrete Fourier transform(DFT) and assigns a quantity called thespectral significance(frequently abbreviated by “sig”) to eachamplitude. This quantity is alogarithmicmeasure of the probability that the given amplitude level would be seen inwhite noise, in the sense of atype I error. It represents the answer to the question, “What would be the chance to obtain an amplitude like the measured one or higher, if the analysed time series wererandom?”
SigSpec may be considered a formal extension to theLomb-Scargle periodogram,[2][3]appropriately incorporating a time series to be averaged to zero before applying the DFT, which is done in many practical applications. When a zero-mean corrected dataset has to be statistically compared to arandom sample, thesample mean(rather than thepopulation meanonly) has to be zero.
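SigSpec's probability expressions are not reproduced below, but the quantity it starts from, the DFT amplitude spectrum of a zero-mean, not necessarily equidistant time series, can be sketched numerically. This is a generic illustration, not SigSpec's implementation; the 2/K normalisation, the frequency grid, and the simulated data are assumptions made for the example.

```python
import numpy as np

def amplitude_spectrum(t, x, freqs):
    """DFT amplitude of a zero-mean, possibly unevenly sampled series at the given frequencies."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # zero-mean correction, as discussed above
    K = len(x)
    amps = []
    for nu in freqs:
        c = np.sum(x * np.cos(2 * np.pi * nu * t))
        s = np.sum(x * np.sin(2 * np.pi * nu * t))
        amps.append(2.0 / K * np.hypot(c, s))
    return np.array(amps)

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 200))       # gappy, non-equidistant sampling
x = 0.7 * np.sin(2 * np.pi * 1.3 * t) + rng.normal(0, 0.5, t.size)

freqs = np.linspace(0.05, 5, 500)
amps = amplitude_spectrum(t, x, freqs)
print(freqs[np.argmax(amps)])              # close to the injected frequency 1.3
```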
Considering a time series to be represented by a set of K{\displaystyle K} pairs (tk,xk){\displaystyle (t_{k},x_{k})}, the amplitude pdf of white noise in Fourier space, depending on frequency and phase angle, may be described in terms of three parameters, α0{\displaystyle \alpha _{0}}, β0{\displaystyle \beta _{0}}, θ0{\displaystyle \theta _{0}}, defining the “sampling profile”, according to
In terms of the phase angle in Fourier space,θ{\displaystyle \theta }, with
the probability density of amplitudes is given by
where the sock function is defined by
and<x2>{\displaystyle <x^{2}>}denotes thevarianceof thedependent variablexk{\displaystyle x_{k}}.
Integration of the pdf yields the false-alarm probability that white noise in thetime domainproduces an amplitude of at leastA{\displaystyle A},
The sig is defined as the negative logarithm of the false-alarm probability and evaluates to
It returns the number of random time series one would have to examine to obtain one amplitude exceedingA{\displaystyle A}at the given frequency and phase.
SigSpec is primarily used inasteroseismologyto identifyvariable starsand to classify stellar pulsation (see references below). The fact that this method incorporates the properties of the time-domain sampling appropriately makes it a valuable tool for typical astronomical measurements containing data gaps.
https://en.wikipedia.org/wiki/SigSpec
Inmathematicsandmultivariate statistics, thecentering matrix[1]is asymmetricandidempotent matrix, which when multiplied with a vector has the same effect as subtracting themeanof the components of the vector from every component of that vector.
The centering matrix of size n is defined as the n-by-n matrix Cn=In−1nJn,{\displaystyle C_{n}=I_{n}-{\tfrac {1}{n}}J_{n},}
whereIn{\displaystyle I_{n}\,}is theidentity matrixof sizenandJn{\displaystyle J_{n}}is ann-by-nmatrix of all 1's.
For example, withn=3{\displaystyle n=3},C3=I3−13J3=(23−13−13−1323−13−13−1323).{\displaystyle C_{3}=I_{3}-{\tfrac {1}{3}}J_{3}={\begin{pmatrix}{\tfrac {2}{3}}&-{\tfrac {1}{3}}&-{\tfrac {1}{3}}\\-{\tfrac {1}{3}}&{\tfrac {2}{3}}&-{\tfrac {1}{3}}\\-{\tfrac {1}{3}}&-{\tfrac {1}{3}}&{\tfrac {2}{3}}\end{pmatrix}}.}
Given a column-vector,v{\displaystyle \mathbf {v} \,}of sizen, thecentering propertyofCn{\displaystyle C_{n}\,}can be expressed asCnv=v−(1nJn,1Tv)Jn,1{\displaystyle C_{n}\,\mathbf {v} =\mathbf {v} -\left({\tfrac {1}{n}}J_{n,1}^{\textrm {T}}\mathbf {v} \right)J_{n,1}}
whereJn,1{\displaystyle J_{n,1}}is acolumn vector of onesand1nJn,1Tv{\displaystyle {\tfrac {1}{n}}J_{n,1}^{\textrm {T}}\mathbf {v} }is the mean of the components ofv{\displaystyle \mathbf {v} \,}.
Cn{\displaystyle C_{n}\,}is symmetricpositive semi-definite.
Cn{\displaystyle C_{n}\,}isidempotent, so thatCnk=Cn{\displaystyle C_{n}^{k}=C_{n}}, fork=1,2,…{\displaystyle k=1,2,\ldots }. Once the mean has been removed, it is zero and removing it again has no effect.
Cn{\displaystyle C_{n}\,}issingular. The effects of applying the transformationCnv{\displaystyle C_{n}\,\mathbf {v} }cannot be reversed.
Cn{\displaystyle C_{n}\,}has theeigenvalue1 of multiplicityn− 1 and eigenvalue 0 of multiplicity 1.
Cn{\displaystyle C_{n}\,}has anullspaceof dimension 1, along the vectorJn,1{\displaystyle J_{n,1}}.
Cn{\displaystyle C_{n}\,}is anorthogonal projection matrix. That is,Cnv{\displaystyle C_{n}\mathbf {v} }is a projection ofv{\displaystyle \mathbf {v} \,}onto the (n− 1)-dimensionalsubspacethat is orthogonal to the nullspaceJn,1{\displaystyle J_{n,1}}. (This is the subspace of alln-vectors whose components sum to zero.)
The trace ofCn{\displaystyle C_{n}}isn(n−1)/n=n−1{\displaystyle n(n-1)/n=n-1}.
Although multiplication by the centering matrix is not a computationally efficient way of removing the mean from a vector, it is a convenient analytical tool. It can be used not only to remove the mean of a single vector, but also of multiple vectors stored in the rows or columns of anm-by-nmatrixX{\displaystyle X}.
The left multiplication byCm{\displaystyle C_{m}}subtracts a corresponding mean value from each of thencolumns, so that each column of the productCmX{\displaystyle C_{m}\,X}has a zero mean. Similarly, the multiplication byCn{\displaystyle C_{n}}on the right subtracts a corresponding mean value from each of themrows, and each row of the productXCn{\displaystyle X\,C_{n}}has a zero mean.
The multiplication on both sides creates a doubly centred matrixCmXCn{\displaystyle C_{m}\,X\,C_{n}}, whose row and column means are equal to zero.
The centering matrix provides in particular a succinct way to express thescatter matrix,S=(X−μJn,1T)(X−μJn,1T)T{\displaystyle S=(X-\mu J_{n,1}^{\mathrm {T} })(X-\mu J_{n,1}^{\mathrm {T} })^{\mathrm {T} }}of a data sampleX{\displaystyle X\,}, whereμ=1nXJn,1{\displaystyle \mu ={\tfrac {1}{n}}XJ_{n,1}}is thesample mean. The centering matrix allows us to express the scatter matrix more compactly asS=XCnXT.{\displaystyle S=X\,C_{n}\,X^{\mathrm {T} }.}
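As a quick numerical check (a minimal numpy sketch with arbitrary synthetic data), building C_n as I_n − J_n/n reproduces both the row-centering property and the scatter-matrix identity stated above:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6                                   # X is m-by-n, observations in the n columns
X = rng.normal(size=(m, n))

C = np.eye(n) - np.ones((n, n)) / n           # centering matrix C_n = I_n - J_n / n
mu = X.mean(axis=1, keepdims=True)            # sample mean mu = (1/n) X J_{n,1}

S_direct = (X - mu) @ (X - mu).T              # scatter matrix written out directly
S_center = X @ C @ X.T                        # compact form S = X C_n X^T (C_n symmetric, idempotent)

print(np.allclose(S_direct, S_center))        # True
print(np.allclose((X @ C).mean(axis=1), 0.0)) # True: each row of X C_n has zero mean
```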
Cn{\displaystyle C_{n}}is thecovariance matrixof themultinomial distribution, in the special case where the parameters of that distribution arek=n{\displaystyle k=n}, andp1=p2=⋯=pn=1n{\displaystyle p_{1}=p_{2}=\cdots =p_{n}={\frac {1}{n}}}.
|
https://en.wikipedia.org/wiki/Centering_matrix
|
Dykstra's algorithmis a method that computes a point in the intersection ofconvex sets, and is a variant of thealternating projectionmethod (also called theprojections onto convex setsmethod). In its simplest form, the method finds a point in the intersection of two convex sets by iteratively projecting onto each of the convex sets; it differs from the alternating projection method in that there are intermediate steps. A parallel version of the algorithm was developed by Gaffke and Mathar.
The method is named after Richard L. Dykstra who proposed it in the 1980s.
A key difference between Dykstra's algorithm and the standard alternating projection method occurs when there is more than one point in the intersection of the two sets. In this case, the alternating projection method gives some arbitrary point in this intersection, whereas Dykstra's algorithm gives a specific point: the projection ofronto the intersection, whereris the initial point used in the algorithm.
Dykstra's algorithm finds for eachr{\displaystyle r}the onlyx¯∈C∩D{\displaystyle {\bar {x}}\in C\cap D}such that‖x¯−r‖2≤‖x−r‖2for allx∈C∩D,{\displaystyle \|{\bar {x}}-r\|^{2}\leq \|x-r\|^{2}{\text{ for all }}x\in C\cap D,}
whereC,D{\displaystyle C,D}areconvex sets. This problem is equivalent to finding theprojectionofr{\displaystyle r}onto the setC∩D{\displaystyle C\cap D}, which we denote byPC∩D{\displaystyle {\mathcal {P}}_{C\cap D}}.
To use Dykstra's algorithm, one must know how to project onto the setsC{\displaystyle C}andD{\displaystyle D}separately.
First, consider the basicalternating projection(aka POCS) method (first studied, in the case when the setsC,D{\displaystyle C,D}were linear subspaces, byJohn von Neumann[1]), which initializesx0=r{\displaystyle x_{0}=r}and then generates the sequencexk+1=PC(PD(xk)).{\displaystyle x_{k+1}={\mathcal {P}}_{C}{\bigl (}{\mathcal {P}}_{D}(x_{k}){\bigr )}.}
Dykstra's algorithm is of a similar form, but uses additional auxiliary variables. Start withx0=r,p0=q0=0{\displaystyle x_{0}=r,p_{0}=q_{0}=0}and update byyk=PC(xk+pk),pk+1=xk+pk−yk,xk+1=PD(yk+qk),qk+1=yk+qk−xk+1.{\displaystyle {\begin{aligned}y_{k}&={\mathcal {P}}_{C}(x_{k}+p_{k}),&p_{k+1}&=x_{k}+p_{k}-y_{k},\\x_{k+1}&={\mathcal {P}}_{D}(y_{k}+q_{k}),&q_{k+1}&=y_{k}+q_{k}-x_{k+1}.\end{aligned}}}
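A minimal Python sketch of this two-set iteration, assuming user-supplied Euclidean projections project_C and project_D (the box and ball used below are chosen purely for illustration):

```python
import numpy as np

def dykstra(r, project_C, project_D, iters=500):
    """Two-set Dykstra iteration: approximates the projection of r onto the intersection of C and D."""
    x = np.asarray(r, dtype=float)
    p = np.zeros_like(x)                      # correction attached to the projection onto C
    q = np.zeros_like(x)                      # correction attached to the projection onto D
    for _ in range(iters):
        y = project_C(x + p)
        p = x + p - y
        x = project_D(y + q)
        q = y + q - x
    return x

# Illustrative sets: C = box [0, 1]^2, D = Euclidean ball of radius 1 centred at (1, 1).
project_C = lambda z: np.clip(z, 0.0, 1.0)

def project_D(z, centre=np.array([1.0, 1.0]), radius=1.0):
    d = z - centre
    nrm = np.linalg.norm(d)
    return z if nrm <= radius else centre + radius * d / nrm

print(dykstra(np.array([2.0, -1.0]), project_C, project_D))
```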
Then the sequence(xk){\displaystyle (x_{k})}converges to the solution of the original problem. Convergence results and a modern perspective on the related literature can be found in.[2]
|
https://en.wikipedia.org/wiki/Dykstra%27s_projection_algorithm
|
Inmathematics, aninvariant subspaceof alinear mappingT:V→V, i.e. from somevector spaceVto itself, is asubspaceWofVthat is preserved byT. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually.
Consider a vector spaceV{\displaystyle V}and a linear mapT:V→V.{\displaystyle T:V\to V.}A subspaceW⊆V{\displaystyle W\subseteq V}is called aninvariant subspace forT{\displaystyle T}, or equivalently,T-invariant, ifTtransforms any vectorv∈W{\displaystyle \mathbf {v} \in W}back intoW. In formulas, this can be writtenv∈W⟹T(v)∈W{\displaystyle \mathbf {v} \in W\implies T(\mathbf {v} )\in W}or[1]TW⊆W.{\displaystyle TW\subseteq W{\text{.}}}
In this case,Trestrictsto anendomorphismofW:[2]T|W:W→W;T|W(w)=T(w).{\displaystyle T|_{W}:W\to W{\text{;}}\quad T|_{W}(\mathbf {w} )=T(\mathbf {w} ){\text{.}}}
The existence of an invariant subspace also has amatrix formulation. Pick abasisCforWand complete it to a basisBofV. With respect toB, the operatorThas formT=[T|WT120T22]{\displaystyle T={\begin{bmatrix}T|_{W}&T_{12}\\0&T_{22}\end{bmatrix}}}for someT12andT22, whereT|W{\displaystyle T|_{W}}here denotes the matrix ofT|W{\displaystyle T|_{W}}with respect to the basisC.
Any linear mapT:V→V{\displaystyle T:V\to V}admits the following invariant subspaces: the whole spaceV{\displaystyle V}and the zero subspace{0}{\displaystyle \{0\}}.
These are the improper and trivial invariant subspaces, respectively. Certain linear operators have no proper non-trivial invariant subspace: for instance,rotationof a two-dimensionalrealvector space by an angle that is not a multiple of π. However, theaxisof a rotation in three dimensions is always an invariant subspace.
IfUis a 1-dimensional invariant subspace for operatorTwith vectorv∈U, then the vectorsvandTvmust belinearly dependent. Thus∀v∈U∃α∈R:Tv=αv.{\displaystyle \forall \mathbf {v} \in U\;\exists \alpha \in \mathbb {R} :T\mathbf {v} =\alpha \mathbf {v} {\text{.}}}In fact, the scalarαdoes not depend onv.
The equation above formulates aneigenvalueproblem. AnyeigenvectorforTspans a 1-dimensional invariant subspace, and vice-versa. In particular, a nonzeroinvariant vector(i.e. afixed pointofT) spans an invariant subspace of dimension 1.
As a consequence of thefundamental theorem of algebra, every linear operator on a nonzerofinite-dimensionalcomplexvector space has an eigenvector. Therefore, every such linear operator in at least two dimensions has a proper non-trivial invariant subspace.
Determining whether a given subspaceWis invariant underTis ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically.
WriteVas thedirect sumW⊕W′; a suitableW′can always be chosen by extending a basis ofW. The associatedprojection operatorPontoWhas matrix representationP=[1000]:W⊕W′→W⊕W′,{\displaystyle P={\begin{bmatrix}1&0\\0&0\end{bmatrix}}:{\begin{matrix}W\\\oplus \\W'\end{matrix}}\rightarrow {\begin{matrix}W\\\oplus \\W'\end{matrix}},}where 1 denotes the identity map onW.
A straightforward calculation shows thatWisT-invariant if and only ifPTP=TP.
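As a small numerical illustration (numpy, with an arbitrary random matrix), the criterion PTP = TP can be tested for a line spanned by an eigenvector, which is always invariant, versus a randomly chosen line, which generically is not; the helper names below are ad hoc:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 3))

def projector(W):
    """Orthogonal projection onto the column space of W (a concrete choice of P with range W)."""
    Q, _ = np.linalg.qr(W)
    return Q @ Q.T

def is_invariant(T, W, tol=1e-10):
    P = projector(W)
    return np.allclose(P @ T @ P, T @ P, atol=tol)   # the criterion PTP = TP

eigenvalues, eigenvectors = np.linalg.eig(T)
idx = int(np.argmin(np.abs(eigenvalues.imag)))       # a real 3x3 matrix has at least one real eigenvalue
w_eig = eigenvectors[:, [idx]].real
print(is_invariant(T, w_eig))                        # True: an eigenvector spans an invariant line
print(is_invariant(T, rng.normal(size=(3, 1))))      # typically False for a random line
```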
If 1 is theidentity operator, then1-Pis projection ontoW′. The equationTP=PTholds if and only if both im(P) and im(1 −P) are invariant underT. In that case,Thas matrix representationT=[T1100T22]:im(P)⊕im(1−P)→im(P)⊕im(1−P).{\displaystyle T={\begin{bmatrix}T_{11}&0\\0&T_{22}\end{bmatrix}}:{\begin{matrix}\operatorname {im} (P)\\\oplus \\\operatorname {im} (1-P)\end{matrix}}\rightarrow {\begin{matrix}\operatorname {im} (P)\\\oplus \\\operatorname {im} (1-P)\end{matrix}}\;.}
Colloquially, a projection that commutes withT"diagonalizes"T.
As the above examples indicate, the invariant subspaces of a given linear transformationTshed light on the structure ofT. WhenVis a finite-dimensional vector space over analgebraically closed field, linear transformations acting onVare characterized (up to similarity) by theJordan canonical form, which decomposesVinto invariant subspaces ofT. Many fundamental questions regardingTcan be translated to questions about invariant subspaces ofT.
The set ofT-invariant subspaces ofVis sometimes called theinvariant-subspace latticeofTand writtenLat(T). As the name suggests, it is a (modular)lattice, withmeets and joinsgiven by (respectively)set intersectionandlinear span. Aminimal elementinLat(T)is said to be aminimal invariant subspace.
In the study of infinite-dimensional operators,Lat(T)is sometimes restricted to only theclosedinvariant subspaces.
Given a collectionTof operators, a subspace is calledT-invariant if it is invariant under eachT∈T.
As in the single-operator case, the invariant-subspace lattice ofT, writtenLat(T), is the set of allT-invariant subspaces, and bears the same meet and join operations. Set-theoretically, it is the intersectionLat(T)=⋂T∈TLat(T).{\displaystyle \mathrm {Lat} ({\mathcal {T}})=\bigcap _{T\in {\mathcal {T}}}{\mathrm {Lat} (T)}{\text{.}}}
LetEnd(V)be the set of all linear operators onV. ThenLat(End(V))={0,V}.
Given arepresentationof agroupGon a vector spaceV, we have a linear transformationT(g) :V→Vfor every elementgofG. If a subspaceWofVis invariant with respect to all these transformations, then it is asubrepresentationand the groupGacts onWin a natural way. The same construction applies torepresentations of an algebra.
As another example, letT∈ End(V)andΣbe the algebra generated by {1,T}, where 1 is the identity operator. Then Lat(T) = Lat(Σ).
Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite-dimensional complex vector space has a non-trivial invariant subspace, thefundamental theorem of noncommutative algebraasserts that Lat(Σ) contains non-trivial elements for certain Σ.
Theorem(Burnside)—AssumeVis a complex vector space of finite dimension. For every proper subalgebraΣofEnd(V),Lat(Σ)contains a non-trivial element.
One consequence is that every commuting family inL(V) can be simultaneouslyupper-triangularized. To see this, note that an upper-triangular matrix representation corresponds to aflagof invariant subspaces, that a commuting family generates a commuting algebra, and thatEnd(V)is not commutative whendim(V) ≥ 2.
IfAis analgebra, one can define aleft regular representationΦ onAby Φ(a)b=ab; Φ is ahomomorphismfromAtoL(A), the algebra of linear transformations onA.
The invariant subspaces of Φ are precisely the left ideals ofA. A left idealMofAgives a subrepresentation ofAonM.
IfMis a leftidealofAthen the left regular representation Φ onMnow descends to a representation Φ' on thequotient vector spaceA/M. If [b] denotes anequivalence classinA/M, Φ'(a)[b] = [ab]. The kernel of the representation Φ' is the set {a∈A|ab∈Mfor allb}.
The representation Φ' isirreducibleif and only ifMis amaximalleft ideal, since a subspaceV⊂A/Mis invariant under {Φ'(a) |a∈A} if and only if itspreimageunder thequotient map,V+M, is a left ideal inA.
The invariant subspace problem concerns the case whereVis a separableHilbert spaceover thecomplex numbers, of dimension > 1, andTis abounded operator. The problem is to decide whether every suchThas a non-trivial, closed, invariant subspace. It is unsolved.
In the more general case whereVis assumed to be aBanach space,Per Enflo(1976) found an example of an operator without an invariant subspace. A concrete example of an operator without an invariant subspace was produced in 1985 byCharles Read.
Related to invariant subspaces are so-called almost-invariant-halfspaces (AIHS's). A closed subspaceY{\displaystyle Y}of a Banach spaceX{\displaystyle X}is said to bealmost-invariantunder an operatorT∈B(X){\displaystyle T\in {\mathcal {B}}(X)}ifTY⊆Y+E{\displaystyle TY\subseteq Y+E}for some finite-dimensional subspaceE{\displaystyle E}; equivalently,Y{\displaystyle Y}is almost-invariant underT{\displaystyle T}if there is afinite-rank operatorF∈B(X){\displaystyle F\in {\mathcal {B}}(X)}such that(T+F)Y⊆Y{\displaystyle (T+F)Y\subseteq Y}, i.e. ifY{\displaystyle Y}is invariant (in the usual sense) underT+F{\displaystyle T+F}. In this case, the minimum possible dimension ofE{\displaystyle E}(or rank ofF{\displaystyle F}) is called thedefect.
Clearly, every finite-dimensional and finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say thatY{\displaystyle Y}is a halfspace whenever it is a closed subspace with infinite dimension and infinite codimension.
The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved; that is, ifX{\displaystyle X}is a complex infinite-dimensional Banach space andT∈B(X){\displaystyle T\in {\mathcal {B}}(X)}thenT{\displaystyle T}admits an AIHS of defect at most 1. It is not currently known whether the same holds ifX{\displaystyle X}is a real Banach space. However, some partial results have been established: for instance, anyself-adjoint operatoron an infinite-dimensional real Hilbert space admits an AIHS, as does any strictly singular (or compact) operator acting on a real infinite-dimensional reflexive space.
|
https://en.wikipedia.org/wiki/Invariant_subspace
|
Inlinear algebra,orthogonalizationis the process of finding asetoforthogonal vectorsthatspana particularsubspace. Formally, starting with alinearly independentset of vectors {v1, ... ,vk} in aninner product space(most commonly theEuclidean spaceRn), orthogonalization results in a set oforthogonalvectors {u1, ... ,uk} thatgeneratethe same subspace as the vectorsv1, ... ,vk. Every vector in the new set is orthogonal to every other vector in the new set; and the new set and the old set have the samelinear span.
In addition, if we want the resulting vectors to all beunit vectors, then wenormalizeeach vector and the procedure is calledorthonormalization.
Orthogonalization is also possible with respect to anysymmetric bilinear form(not necessarily an inner product, not necessarily overreal numbers), but standard algorithms may encounterdivision by zeroin this more general setting.
Methods for performing orthogonalization include the Gram–Schmidt process, Householder transformations, Givens rotations, and symmetric (Löwdin) orthogonalization.
When performing orthogonalization on a computer, the Householder transformation is usually preferred over the Gram–Schmidt process since it is morenumerically stable, i.e. rounding errors tend to have less serious effects.
On the other hand, the Gram–Schmidt process produces the jth orthogonalized vector after the jth iteration, while orthogonalization using Householder reflections produces all the vectors only at the end. This makes only the Gram–Schmidt process applicable foriterative methodslike theArnoldi iteration.
The Givens rotation is more easilyparallelizedthan Householder transformations.
Symmetric orthogonalization was formulated byPer-Olov Löwdin.[1]
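For concreteness, here is a minimal modified Gram–Schmidt sketch in Python (numpy), producing the orthonormalized vectors one column at a time as described above; it is meant as an illustration rather than a numerically robust library routine:

```python
import numpy as np

def modified_gram_schmidt(V):
    """Orthonormalize the columns of V (assumed linearly independent), one column at a time."""
    V = np.array(V, dtype=float)
    n, k = V.shape
    Q = np.zeros((n, k))
    for j in range(k):
        v = V[:, j].copy()
        for i in range(j):
            v -= np.dot(Q[:, i], v) * Q[:, i]   # remove the component along each earlier u_i
        Q[:, j] = v / np.linalg.norm(v)         # normalization step (orthonormalization)
    return Q

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q = modified_gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))          # True: the columns are orthonormal
```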
To compensate for the loss of useful signal in traditional noise attenuation approaches, caused by incorrect parameter selection or inadequacy ofdenoisingassumptions, a weighting operator can be applied to the initially denoised section to retrieve useful signal from the initial noise section. The new denoising process is referred to as the local orthogonalization of signal and noise.[2]It has a wide range of applications in manysignal processingandseismic explorationfields.
|
https://en.wikipedia.org/wiki/Orthogonalization
|
Inlinear algebra, thetraceof asquare matrixA, denotedtr(A),[1]is the sum of the elements on itsmain diagonal,a11+a22+⋯+ann{\displaystyle a_{11}+a_{22}+\dots +a_{nn}}. It is only defined for a square matrix (n×n).
The trace of a matrix is the sum of itseigenvalues(counted with multiplicities). Also,tr(AB) = tr(BA)for any matricesAandBof the same size. Thus,similar matriceshave the same trace. As a consequence, one can define the trace of alinear operatormapping a finite-dimensionalvector spaceinto itself, since all matrices describing such an operator with respect to a basis are similar.
The trace is related to the derivative of thedeterminant(seeJacobi's formula).
Thetraceof ann×nsquare matrixAis defined as[1][2][3]: 34tr(A)=∑i=1naii=a11+a22+⋯+ann{\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i=1}^{n}a_{ii}=a_{11}+a_{22}+\dots +a_{nn}}whereaiidenotes the entry on theithrow andithcolumn ofA. The entries ofAcan bereal numbers,complex numbers, or more generally elements of afieldF. The trace is not defined for non-square matrices.
LetAbe a matrix, withA=(a11a12a13a21a22a23a31a32a33)=(1031152612−5){\displaystyle \mathbf {A} ={\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}}={\begin{pmatrix}1&0&3\\11&5&2\\6&12&-5\end{pmatrix}}}
Thentr(A)=∑i=13aii=a11+a22+a33=1+5+(−5)=1{\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i=1}^{3}a_{ii}=a_{11}+a_{22}+a_{33}=1+5+(-5)=1}
The trace is alinear mapping. That is,[1][2]tr(A+B)=tr(A)+tr(B)tr(cA)=ctr(A){\displaystyle {\begin{aligned}\operatorname {tr} (\mathbf {A} +\mathbf {B} )&=\operatorname {tr} (\mathbf {A} )+\operatorname {tr} (\mathbf {B} )\\\operatorname {tr} (c\mathbf {A} )&=c\operatorname {tr} (\mathbf {A} )\end{aligned}}}for all square matricesAandB, and allscalarsc.[3]: 34
A matrix and itstransposehave the same trace:[1][2][3]: 34tr(A)=tr(AT).{\displaystyle \operatorname {tr} (\mathbf {A} )=\operatorname {tr} \left(\mathbf {A} ^{\mathsf {T}}\right).}
This follows immediately from the fact that transposing a square matrix does not affect elements along the main diagonal.
The trace of a square matrix which is the product of two matrices can be rewritten as the sum of entry-wise products of their elements, i.e. as the sum of all elements of theirHadamard product. Phrased directly, ifAandBare twom×nmatrices, then:tr(ATB)=tr(ABT)=tr(BTA)=tr(BAT)=∑i=1m∑j=1naijbij.{\displaystyle \operatorname {tr} \left(\mathbf {A} ^{\mathsf {T}}\mathbf {B} \right)=\operatorname {tr} \left(\mathbf {A} \mathbf {B} ^{\mathsf {T}}\right)=\operatorname {tr} \left(\mathbf {B} ^{\mathsf {T}}\mathbf {A} \right)=\operatorname {tr} \left(\mathbf {B} \mathbf {A} ^{\mathsf {T}}\right)=\sum _{i=1}^{m}\sum _{j=1}^{n}a_{ij}b_{ij}\;.}
If one views any realm×nmatrix as a vector of lengthmn(an operation calledvectorization) then the above operation onAandBcoincides with the standarddot product. According to the above expression,tr(A⊤A)is a sum of squares and hence is nonnegative, equal to zero if and only ifAis zero.[4]: 7Furthermore, as noted in the above formula,tr(A⊤B) = tr(B⊤A). These demonstrate the positive-definiteness and symmetry required of aninner product; it is common to calltr(A⊤B)theFrobenius inner productofAandB. This is a natural inner product on thevector spaceof all real matrices of fixed dimensions. Thenormderived from this inner product is called theFrobenius norm, and it satisfies a submultiplicative property, as can be proven with theCauchy–Schwarz inequality:0≤[tr(AB)]2≤tr(ATA)tr(BTB),{\displaystyle 0\leq \left[\operatorname {tr} (\mathbf {A} \mathbf {B} )\right]^{2}\leq \operatorname {tr} \left(\mathbf {A} ^{\mathsf {T}}\mathbf {A} \right)\operatorname {tr} \left(\mathbf {B} ^{\mathsf {T}}\mathbf {B} \right),}ifAandBare real matrices such thatABis a square matrix. The Frobenius inner product and norm arise frequently inmatrix calculusandstatistics.
The Frobenius inner product may be extended to ahermitian inner producton thecomplex vector spaceof all complex matrices of a fixed size, by replacingBby itscomplex conjugate.
The symmetry of the Frobenius inner product may be phrased more directly as follows: the matrices in the trace of a product can be switched without changing the result. IfAandBarem×nandn×mreal or complex matrices, respectively, then[1][2][3]: 34[note 1]
tr(AB)=tr(BA){\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} )=\operatorname {tr} (\mathbf {B} \mathbf {A} )}
This is notable both for the fact thatABdoes not usually equalBA, and also since the trace of either does not usually equaltr(A)tr(B).[note 2]Thesimilarity-invarianceof the trace, meaning thattr(A) = tr(P−1AP)for any square matrixAand any invertible matrixPof the same dimensions, is a fundamental consequence. This is proved bytr(P−1(AP))=tr((AP)P−1)=tr(A).{\displaystyle \operatorname {tr} \left(\mathbf {P} ^{-1}(\mathbf {A} \mathbf {P} )\right)=\operatorname {tr} \left((\mathbf {A} \mathbf {P} )\mathbf {P} ^{-1}\right)=\operatorname {tr} (\mathbf {A} ).}Similarity invariance is the crucial property of the trace in order to discuss traces oflinear transformationsas below.
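Both identities are easy to confirm numerically; the following numpy snippet (with arbitrary random matrices) checks tr(AB) = tr(BA) for rectangular factors and the similarity invariance tr(P⁻¹MP) = tr(M):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 5))
B = rng.normal(size=(5, 3))
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))        # True: tr(AB) = tr(BA)

M = rng.normal(size=(4, 4))
P = rng.normal(size=(4, 4))                                # generically invertible
print(np.isclose(np.trace(np.linalg.inv(P) @ M @ P), np.trace(M)))   # True: similarity invariance
```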
Additionally, for real column vectorsa∈Rn{\displaystyle \mathbf {a} \in \mathbb {R} ^{n}}andb∈Rn{\displaystyle \mathbf {b} \in \mathbb {R} ^{n}}, the trace of the outer product is equivalent to the inner product:
tr(baT)=aTb{\displaystyle \operatorname {tr} \left(\mathbf {b} \mathbf {a} ^{\textsf {T}}\right)=\mathbf {a} ^{\textsf {T}}\mathbf {b} }
More generally, the trace isinvariant undercircular shifts, that is,
tr(ABCD)=tr(BCDA)=tr(CDAB)=tr(DABC).{\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} \mathbf {C} \mathbf {D} )=\operatorname {tr} (\mathbf {B} \mathbf {C} \mathbf {D} \mathbf {A} )=\operatorname {tr} (\mathbf {C} \mathbf {D} \mathbf {A} \mathbf {B} )=\operatorname {tr} (\mathbf {D} \mathbf {A} \mathbf {B} \mathbf {C} ).}
This is known as thecyclic property.
Arbitrary permutations are not allowed: in general,tr(ABCD)≠tr(ACBD).{\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} \mathbf {C} \mathbf {D} )\neq \operatorname {tr} (\mathbf {A} \mathbf {C} \mathbf {B} \mathbf {D} )~.}
However, if products ofthreesymmetricmatrices are considered, any permutation is allowed, since:tr(ABC)=tr((ABC)T)=tr(CBA)=tr(ACB),{\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} \mathbf {C} )=\operatorname {tr} \left(\left(\mathbf {A} \mathbf {B} \mathbf {C} \right)^{\mathsf {T}}\right)=\operatorname {tr} (\mathbf {C} \mathbf {B} \mathbf {A} )=\operatorname {tr} (\mathbf {A} \mathbf {C} \mathbf {B} ),}where the first equality is because the traces of a matrix and its transpose are equal. Note that this is not true in general for more than three factors.
The trace of theKronecker productof two matrices is the product of their traces:tr(A⊗B)=tr(A)tr(B).{\displaystyle \operatorname {tr} (\mathbf {A} \otimes \mathbf {B} )=\operatorname {tr} (\mathbf {A} )\operatorname {tr} (\mathbf {B} ).}
The following three properties:tr(A+B)=tr(A)+tr(B),tr(cA)=ctr(A),tr(AB)=tr(BA),{\displaystyle {\begin{aligned}\operatorname {tr} (\mathbf {A} +\mathbf {B} )&=\operatorname {tr} (\mathbf {A} )+\operatorname {tr} (\mathbf {B} ),\\\operatorname {tr} (c\mathbf {A} )&=c\operatorname {tr} (\mathbf {A} ),\\\operatorname {tr} (\mathbf {A} \mathbf {B} )&=\operatorname {tr} (\mathbf {B} \mathbf {A} ),\end{aligned}}}characterize the traceup toa scalar multiple in the following sense: Iff{\displaystyle f}is alinear functionalon the space of square matrices that satisfiesf(xy)=f(yx),{\displaystyle f(xy)=f(yx),}thenf{\displaystyle f}andtr{\displaystyle \operatorname {tr} }are proportional.[note 3]
Forn×n{\displaystyle n\times n}matrices, imposing the normalizationf(I)=n{\displaystyle f(\mathbf {I} )=n}makesf{\displaystyle f}equal to the trace.
Given anyn×nmatrixA, there is
tr(A)=∑i=1nλi{\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i=1}^{n}\lambda _{i}}
whereλ1, ..., λnare theeigenvaluesofAcounted with multiplicity. This holds true even ifAis a real matrix and some (or all) of the eigenvalues are complex numbers. This may be regarded as a consequence of the existence of theJordan canonical form, together with the similarity-invariance of the trace discussed above.
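A quick numerical check of this relation (numpy, arbitrary random matrix; the complex eigenvalues of a real matrix occur in conjugate pairs, so their sum is real up to rounding):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
eigenvalues = np.linalg.eigvals(A)                        # may contain complex-conjugate pairs
print(np.isclose(np.trace(A), eigenvalues.sum().real))    # True: trace equals the sum of eigenvalues
```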
When bothAandBaren×nmatrices, the trace of the (ring-theoretic)commutatorofAandBvanishes:tr([A,B]) = 0, becausetr(AB) = tr(BA)andtris linear. One can state this as "the trace is a map ofLie algebrasgln→kfrom operators to scalars", as the commutator of scalars is trivial (it is anAbelian Lie algebra). In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices.
Conversely, any square matrix with zero trace is a linear combination of the commutators of pairs of matrices.[note 4]Moreover, any square matrix with zero trace isunitarily equivalentto a square matrix with diagonal consisting of all zeros.
The trace of then×nidentity matrixis the dimension of the space:tr(In)=n.{\displaystyle \operatorname {tr} \left(\mathbf {I} _{n}\right)=n.}
IfAis anilpotentmatrix, thentr(Ak)=0{\displaystyle \operatorname {tr} (\mathbf {A} ^{k})=0}for allk≥1{\displaystyle k\geq 1}. When the characteristic of the base field is zero, the converse also holds: iftr(Ak) = 0for allk, thenAis nilpotent.
The trace of ann×n{\displaystyle n\times n}matrixA{\displaystyle A}is the coefficient oftn−1{\displaystyle t^{n-1}}in thecharacteristic polynomial, possibly up to a sign, depending on the convention used in the definition of the characteristic polynomial.
IfAis a linear operator represented by a square matrix withrealorcomplexentries and ifλ1, ...,λnare theeigenvaluesofA(listed according to theiralgebraic multiplicities), then
tr(A)=∑iλi{\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i}\lambda _{i}}
This follows from the fact thatAis alwayssimilarto itsJordan form, an uppertriangular matrixhavingλ1, ...,λnon the main diagonal. In contrast, thedeterminantofAis theproductof its eigenvalues; that is,det(A)=∏iλi.{\displaystyle \det(\mathbf {A} )=\prod _{i}\lambda _{i}.}
Everything in the present section applies as well to any square matrix with coefficients in analgebraically closed field.
IfΔAis a square matrix with small entries andIdenotes theidentity matrix, then we have approximately
det(I+ΔA)≈1+tr(ΔA).{\displaystyle \det(\mathbf {I} +\mathbf {\Delta A} )\approx 1+\operatorname {tr} (\mathbf {\Delta A} ).}
Precisely, this means that the trace is thederivativeof thedeterminantfunction at the identity matrix. Jacobi's formula
ddet(A)=tr(adj(A)⋅dA){\displaystyle d\det(\mathbf {A} )=\operatorname {tr} {\big (}\operatorname {adj} (\mathbf {A} )\cdot d\mathbf {A} {\big )}}
is more general and describes thedifferentialof the determinant at an arbitrary square matrix, in terms of the trace and theadjugateof the matrix.
From this (or from the connection between the trace and the eigenvalues), one can derive a relation between the trace function, thematrix exponentialfunction, and the determinant:det(exp(A))=exp(tr(A)).{\displaystyle \det(\exp(\mathbf {A} ))=\exp(\operatorname {tr} (\mathbf {A} )).}
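This identity is easy to verify numerically with SciPy's matrix exponential (a small sketch with an arbitrary random matrix):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))   # True: det(exp(A)) = exp(tr(A))
```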
A related characterization of the trace applies to linearvector fields. Given a matrixA, define a vector fieldFonRnbyF(x) =Ax. The components of this vector field are linear functions (given by the rows ofA). ItsdivergencedivFis a constant function, whose value is equal totr(A).
By thedivergence theorem, one can interpret this in terms of flows: ifF(x)represents the velocity of a fluid at locationxandUis a region inRn, thenet flowof the fluid out ofUis given bytr(A) · vol(U), wherevol(U)is thevolumeofU.
The trace is a linear operator, hence it commutes with the derivative:dtr(X)=tr(dX).{\displaystyle d\operatorname {tr} (\mathbf {X} )=\operatorname {tr} (d\mathbf {X} ).}
In general, given some linear mapf:V→V(whereVis a finite-dimensionalvector space), we can define the trace of this map by considering the trace of amatrix representationoff, that is, choosing abasisforVand describingfas a matrix relative to this basis, and taking the trace of this square matrix. The result will not depend on the basis chosen, since different bases will give rise tosimilar matrices, allowing for the possibility of a basis-independent definition for the trace of a linear map.
Such a definition can be given using thecanonical isomorphismbetween the spaceEnd(V)of linear maps onVandV⊗V*, whereV*is thedual spaceofV. Letvbe inVand letgbe inV*. Then the trace of the indecomposable elementv⊗gis defined to beg(v); the trace of a general element is defined by linearity. The trace of a linear mapf:V→Vcan then be defined as the trace, in the above sense, of the element ofV⊗V*corresponding tofunder the above mentioned canonical isomorphism. Using an explicit basis forVand the corresponding dual basis forV*, one can show that this gives the same definition of the trace as given above.
The trace can be estimated unbiasedly by "Hutchinson's trick":[5]
Given any matrixW∈Rn×n{\displaystyle {\boldsymbol {W}}\in \mathbb {R} ^{n\times n}}, and any randomu∈Rn{\displaystyle {\boldsymbol {u}}\in \mathbb {R} ^{n}}withE[uu⊺]=I{\displaystyle \mathbb {E} [{\boldsymbol {u}}{\boldsymbol {u}}^{\intercal }]=\mathbf {I} }, we haveE[u⊺Wu]=trW{\displaystyle \mathbb {E} [{\boldsymbol {u}}^{\intercal }{\boldsymbol {W}}{\boldsymbol {u}}]=\operatorname {tr} {\boldsymbol {W}}}.
For a proof expand the expectation directly.
Usually, the random vector is sampled fromN(0,I){\displaystyle \operatorname {N} (\mathbf {0} ,\mathbf {I} )}(normal distribution) or{±n−1/2}n{\displaystyle \{\pm n^{-1/2}\}^{n}}(Rademacher distribution).
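A minimal sketch of the estimator with ±1 Rademacher probe vectors, which satisfy E[uuᵀ] = I as required above (the matrix and sample count are purely illustrative):

```python
import numpy as np

def hutchinson_trace(W, num_samples=1000, rng=None):
    """Unbiased stochastic trace estimate: average of u^T W u over random u with E[u u^T] = I."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = W.shape[0]
    U = rng.choice([-1.0, 1.0], size=(num_samples, n))      # Rademacher probes: entries +-1
    return np.mean(np.einsum('ij,jk,ik->i', U, W, U))       # each term is u_i^T W u_i

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))
print(hutchinson_trace(W), np.trace(W))   # the estimate fluctuates around the exact trace
```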
More sophisticated stochastic estimators of trace have been developed.[6]
If a 2 × 2 real matrix has zero trace, its square is adiagonal matrix; indeed, by the Cayley–Hamilton theorem its square equals−det(A)times the identity.
The trace of a 2 × 2complex matrixis used to classifyMöbius transformations. First, the matrix is normalized to make itsdeterminantequal to one. Then, if the square of the trace is 4, the corresponding transformation isparabolic. If the square is in the interval[0,4), it iselliptic. Finally, if the square is greater than 4, the transformation isloxodromic. Seeclassification of Möbius transformations.
The trace is used to definecharactersofgroup representations. Two representationsA,B:G→GL(V)of a groupGare equivalent (up to change of basis onV) iftr(A(g)) = tr(B(g))for allg∈G.
The trace also plays a central role in the distribution ofquadratic forms.
The trace is a map of Lie algebrastr:gln→K{\displaystyle \operatorname {tr} :{\mathfrak {gl}}_{n}\to K}from the Lie algebragln{\displaystyle {\mathfrak {gl}}_{n}}of linear operators on ann-dimensional space (n×nmatrices with entries inK{\displaystyle K}) to the Lie algebraKof scalars; asKis Abelian (the Lie bracket vanishes), the fact that this is a map of Lie algebras is exactly the statement that the trace of a bracket vanishes:tr([A,B])=0for eachA,B∈gln.{\displaystyle \operatorname {tr} ([\mathbf {A} ,\mathbf {B} ])=0{\text{ for each }}\mathbf {A} ,\mathbf {B} \in {\mathfrak {gl}}_{n}.}
The kernel of this map, a matrix whose trace iszero, is often said to betracelessortrace free, and these matrices form thesimple Lie algebrasln{\displaystyle {\mathfrak {sl}}_{n}}, which is theLie algebraof thespecial linear groupof matrices with determinant 1. The special linear group consists of the matrices which do not change volume, while thespecial linear Lie algebrais the matrices which do not alter volume ofinfinitesimalsets.
In fact, there is an internaldirect sumdecompositiongln=sln⊕K{\displaystyle {\mathfrak {gl}}_{n}={\mathfrak {sl}}_{n}\oplus K}of operators/matrices into traceless operators/matrices and scalars operators/matrices. The projection map onto scalar operators can be expressed in terms of the trace, concretely as:A↦1ntr(A)I.{\displaystyle \mathbf {A} \mapsto {\frac {1}{n}}\operatorname {tr} (\mathbf {A} )\mathbf {I} .}
Formally, one can compose the trace (thecounitmap) with the unit mapK→gln{\displaystyle K\to {\mathfrak {gl}}_{n}}of "inclusion ofscalars" to obtain a mapgln→gln{\displaystyle {\mathfrak {gl}}_{n}\to {\mathfrak {gl}}_{n}}mapping onto scalars, and multiplying byn. Dividing bynmakes this a projection, yielding the formula above.
In terms ofshort exact sequences, one has0→sln→gln→trK→0{\displaystyle 0\to {\mathfrak {sl}}_{n}\to {\mathfrak {gl}}_{n}{\overset {\operatorname {tr} }{\to }}K\to 0}which is analogous to1→SLn→GLn→detK∗→1{\displaystyle 1\to \operatorname {SL} _{n}\to \operatorname {GL} _{n}{\overset {\det }{\to }}K^{*}\to 1}(whereK∗=K∖{0}{\displaystyle K^{*}=K\setminus \{0\}}) forLie groups. However, the trace splits naturally (via1/n{\displaystyle 1/n}times scalars) sogln=sln⊕K{\displaystyle {\mathfrak {gl}}_{n}={\mathfrak {sl}}_{n}\oplus K}, but the splitting of the determinant would be as thenth root times scalars, and this does not in general define a function, so the determinant does not split and the general linear group does not decompose:GLn≠SLn×K∗.{\displaystyle \operatorname {GL} _{n}\neq \operatorname {SL} _{n}\times K^{*}.}
Thebilinear form(whereX,Yare square matrices)B(X,Y)=tr(ad(X)ad(Y)){\displaystyle B(\mathbf {X} ,\mathbf {Y} )=\operatorname {tr} (\operatorname {ad} (\mathbf {X} )\operatorname {ad} (\mathbf {Y} ))}
B(X,Y){\displaystyle B(\mathbf {X} ,\mathbf {Y} )}is called theKilling form; it is used to classifyLie algebras.
The trace defines a bilinear form:(X,Y)↦tr(XY).{\displaystyle (\mathbf {X} ,\mathbf {Y} )\mapsto \operatorname {tr} (\mathbf {X} \mathbf {Y} )~.}
The form is symmetric, non-degenerate[note 5]and associative in the sense that:tr(X[Y,Z])=tr([X,Y]Z).{\displaystyle \operatorname {tr} (\mathbf {X} [\mathbf {Y} ,\mathbf {Z} ])=\operatorname {tr} ([\mathbf {X} ,\mathbf {Y} ]\mathbf {Z} ).}
For a complex simple Lie algebra (such assln{\displaystyle {\mathfrak {sl}}_{n}}), all such bilinear forms are proportional to one another; in particular, each is proportional to the Killing form[citation needed].
Two matricesXandYare said to betrace orthogonaliftr(XY)=0.{\displaystyle \operatorname {tr} (\mathbf {X} \mathbf {Y} )=0.}
There is a generalization to a general representation(ρ,g,V){\displaystyle (\rho ,{\mathfrak {g}},V)}of a Lie algebrag{\displaystyle {\mathfrak {g}}}, such thatρ{\displaystyle \rho }is a homomorphism of Lie algebrasρ:g→End(V).{\displaystyle \rho :{\mathfrak {g}}\rightarrow {\text{End}}(V).}The trace formtrV{\displaystyle {\text{tr}}_{V}}onEnd(V){\displaystyle {\text{End}}(V)}is defined as above. The bilinear formϕ(X,Y)=trV(ρ(X)ρ(Y)){\displaystyle \phi (\mathbf {X} ,\mathbf {Y} )={\text{tr}}_{V}(\rho (\mathbf {X} )\rho (\mathbf {Y} ))}is symmetric and invariant due to cyclicity.
The concept of trace of a matrix is generalized to thetrace classofcompact operatorsonHilbert spaces, and the analog of theFrobenius normis called theHilbert–Schmidtnorm.
IfK{\displaystyle K}is a trace-class operator, then for anyorthonormal basis{en}n=1{\displaystyle \{e_{n}\}_{n=1}}, the trace is given bytr(K)=∑n⟨en,Ken⟩,{\displaystyle \operatorname {tr} (K)=\sum _{n}\left\langle e_{n},Ke_{n}\right\rangle ,}and is finite and independent of the orthonormal basis.[7]
Thepartial traceis another generalization of the trace that is operator-valued. The trace of a linear operatorZ{\displaystyle Z}which lives on a product spaceA⊗B{\displaystyle A\otimes B}is equal to the partial traces overA{\displaystyle A}andB{\displaystyle B}:tr(Z)=trA(trB(Z))=trB(trA(Z)).{\displaystyle \operatorname {tr} (Z)=\operatorname {tr} _{A}\left(\operatorname {tr} _{B}(Z)\right)=\operatorname {tr} _{B}\left(\operatorname {tr} _{A}(Z)\right).}
For more properties and a generalization of the partial trace, seetraced monoidal categories.
IfA{\displaystyle A}is a generalassociative algebraover a fieldk{\displaystyle k}, then a trace onA{\displaystyle A}is often defined to be anyfunctionaltr:A→k{\displaystyle \operatorname {tr} :A\to k}which vanishes on commutators;tr([a,b])=0{\displaystyle \operatorname {tr} ([a,b])=0}for alla,b∈A{\displaystyle a,b\in A}. Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.
Asupertraceis the generalization of a trace to the setting ofsuperalgebras.
The operation oftensor contractiongeneralizes the trace to arbitrary tensors.
Gomme and Klein (2011) define a matrix trace operatortrm{\displaystyle \operatorname {trm} }that operates onblock matricesand use it to compute second-order perturbation solutions to dynamic economic models without the need fortensor notation.[8]
Given a vector spaceV, there is a natural bilinear mapV×V∗→Fgiven by sending(v, φ)to the scalarφ(v). Theuniversal propertyof thetensor productV⊗V∗automatically implies that this bilinear map is induced by a linear functional onV⊗V∗.[9]
Similarly, there is a natural bilinear mapV×V∗→ Hom(V,V)given by sending(v, φ)to the linear mapw↦ φ(w)v. The universal property of the tensor product, just as used previously, says that this bilinear map is induced by a linear mapV⊗V∗→ Hom(V,V). IfVis finite-dimensional, then this linear map is alinear isomorphism.[9]This fundamental fact is a straightforward consequence of the existence of a (finite) basis ofV, and can also be phrased as saying that any linear mapV→Vcan be written as the sum of (finitely many) rank-one linear maps. Composing the inverse of the isomorphism with the linear functional obtained above results in a linear functional onHom(V,V). This linear functional is exactly the same as the trace.
Using the definition of trace as the sum of diagonal elements, the matrix formulatr(AB) = tr(BA)is straightforward to prove, and was given above. In the present perspective, one is considering linear mapsSandT, and viewing them as sums of rank-one maps, so that there are linear functionalsφiandψjand nonzero vectorsviandwjsuch thatS(u) = Σφi(u)viandT(u) = Σψj(u)wjfor anyuinV. Then
for anyuinV. The rank-one linear mapu↦ψj(u)φi(wj)vihas traceψj(vi)φi(wj)and so
Following the same procedure withSandTreversed, one finds exactly the same formula, proving thattr(S∘T)equalstr(T∘S).
The above proof can be regarded as being based upon tensor products, given that the fundamental identity ofEnd(V)withV⊗V∗is equivalent to the expressibility of any linear map as the sum of rank-one linear maps. As such, the proof may be written in the notation of tensor products. Then one may consider the multilinear mapV×V∗×V×V∗→V⊗V∗given by sending(v,φ,w,ψ)toφ(w)v⊗ψ. Further composition with the trace map then results inφ(w)ψ(v), and this is unchanged if one were to have started with(w,ψ,v,φ)instead. One may also consider the bilinear mapEnd(V) × End(V) → End(V)given by sending(f,g)to the compositionf∘g, which is then induced by a linear mapEnd(V) ⊗ End(V) → End(V). It can be seen that this coincides with the linear mapV⊗V∗⊗V⊗V∗→V⊗V∗. The established symmetry upon composition with the trace map then establishes the equality of the two traces.[9]
For any finite dimensional vector spaceV, there is a natural linear mapF→V⊗V'; in the language of linear maps, it assigns to a scalarcthe linear mapc⋅idV. Sometimes this is calledcoevaluation map, and the traceV⊗V'→Fis calledevaluation map.[9]These structures can be axiomatized to definecategorical tracesin the abstract setting ofcategory theory.
|
https://en.wikipedia.org/wiki/Trace_(linear_algebra)#Properties
|
Inmathematics,statistics,finance,[1]andcomputer science, particularly inmachine learningandinverse problems,regularizationis a process that converts theanswer to a probleminto a simpler one. It is often used in solvingill-posed problemsor to preventoverfitting.[2]
Although regularization procedures can be divided in many ways, the following delineation is particularly helpful:
In explicit regularization, independent of the problem or model, there is always a data term, which corresponds to a likelihood of the measurement, and a regularization term, which corresponds to a prior. By combining both using Bayesian statistics, one can compute a posterior that includes both information sources and therefore stabilizes the estimation process. By trading off both objectives, one chooses to be more aligned with the data or to enforce regularization (to prevent overfitting). There is a whole research branch dealing with all possible regularizations. In practice, one usually tries a specific regularization and then works out the probability density that corresponds to it, in order to justify the choice. It can also be physically motivated by common sense or intuition.
Inmachine learning, the data term corresponds to the training data and the regularization is either the choice of the model or modifications to the algorithm. It is always intended to reduce thegeneralization error, i.e. the error score with the trained model on the evaluation set (testing data) and not the training data.[3]
One of the earliest uses of regularization isTikhonov regularization(ridge regression), related to the method of least squares.
Inmachine learning, a key challenge is enabling models to accurately predict outcomes on unseen data, not just on familiar training data. Regularization is crucial for addressingoverfitting—where a model memorizes training data details but cannot generalize to new data. The goal of regularization is to encourage models to learn the broader patterns within the data rather than memorizing it. Techniques likeearly stopping, L1 andL2 regularization, anddropoutare designed to prevent overfitting and underfitting, thereby enhancing the model's ability to adapt to and perform well with new data, thus improving model generalization.[4]
Early stopping halts training when performance on a validation set deteriorates, preventing overfitting by stopping before the model memorizes the training data.[4]
L1 and L2 regularization add penalty terms to the cost function to discourage complex models: the L1 penalty is the sum of the absolute values of the weights and promotes sparse solutions, while the L2 penalty is the sum of the squared weights and shrinks them toward zero.
In the context of neural networks, the Dropout technique repeatedly ignores random subsets of neurons during training, which simulates the training of multiple neural network architectures at once to improve generalization.[4]
Empirical learning of classifiers (from a finite data set) is always anunderdeterminedproblem, because it attempts to infer a function of anyx{\displaystyle x}given only examplesx1,x2,…,xn{\displaystyle x_{1},x_{2},\dots ,x_{n}}.
A regularization term (or regularizer)R(f){\displaystyle R(f)}is added to aloss function:minf∑i=1nV(f(xi),yi)+λR(f){\displaystyle \min _{f}\sum _{i=1}^{n}V(f(x_{i}),y_{i})+\lambda R(f)}whereV{\displaystyle V}is an underlying loss function that describes the cost of predictingf(x){\displaystyle f(x)}when the label isy{\displaystyle y}, such as thesquare lossorhinge loss; andλ{\displaystyle \lambda }is a parameter which controls the importance of the regularization term.R(f){\displaystyle R(f)}is typically chosen to impose a penalty on the complexity off{\displaystyle f}. Concrete notions of complexity used include restrictions forsmoothnessand bounds on thevector space norm.[5][page needed]
A theoretical justification for regularization is that it attempts to imposeOccam's razoron the solution: among functions that explain the data comparably well, the simpler one may be preferred. From aBayesianpoint of view, many regularization techniques correspond to imposing certainpriordistributions on model parameters.[6]
Regularization can serve multiple purposes, including learning simpler models, inducing models to be sparse and introducing group structure[clarification needed]into the learning problem.
The same idea arose in many fields ofscience. A simple form of regularization applied tointegral equations(Tikhonov regularization) is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, includingtotal variation regularization, have become popular.
Regularization can be motivated as a technique to improve the generalizability of a learned model.
The goal of this learning problem is to find a function that fits or predicts the outcome (label) that minimizes the expected error over all possible inputs and labels. The expected error of a functionfn{\displaystyle f_{n}}is:I[fn]=∫X×YV(fn(x),y)ρ(x,y)dxdy{\displaystyle I[f_{n}]=\int _{X\times Y}V(f_{n}(x),y)\rho (x,y)\,dx\,dy}whereX{\displaystyle X}andY{\displaystyle Y}are the domains of input datax{\displaystyle x}and their labelsy{\displaystyle y}respectively.
Typically in learning problems, only a subset of input data and labels are available, measured with some noise. Therefore, the expected error is unmeasurable, and the best surrogate available is the empirical error over theN{\displaystyle N}available samples:IS[fn]=1n∑i=1NV(fn(x^i),y^i){\displaystyle I_{S}[f_{n}]={\frac {1}{n}}\sum _{i=1}^{N}V(f_{n}({\hat {x}}_{i}),{\hat {y}}_{i})}Without bounds on the complexity of the function space (formally, thereproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. If measurements (e.g. ofxi{\displaystyle x_{i}}) were made with noise, this model may suffer fromoverfittingand display poor expected error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization.
These techniques are named forAndrey Nikolayevich Tikhonov, who applied regularization tointegral equationsand made important contributions in many other areas.
When learning a linear functionf{\displaystyle f}, characterized by an unknownvectorw{\displaystyle w}such thatf(x)=w⋅x{\displaystyle f(x)=w\cdot x}, one can add theL2{\displaystyle L_{2}}-norm of the vectorw{\displaystyle w}to the loss expression in order to prefer solutions with smaller norms. Tikhonov regularization is one of the most common forms. It is also known as ridge regression. It is expressed as:minw∑i=1nV(x^i⋅w,y^i)+λ‖w‖22,{\displaystyle \min _{w}\sum _{i=1}^{n}V({\hat {x}}_{i}\cdot w,{\hat {y}}_{i})+\lambda \left\|w\right\|_{2}^{2},}where(x^i,y^i),1≤i≤n,{\displaystyle ({\hat {x}}_{i},{\hat {y}}_{i}),\,1\leq i\leq n,}would represent samples used for training.
In the case of a general function, the norm of the function in itsreproducing kernel Hilbert spaceis:minf∑i=1nV(f(x^i),y^i)+λ‖f‖H2{\displaystyle \min _{f}\sum _{i=1}^{n}V(f({\hat {x}}_{i}),{\hat {y}}_{i})+\lambda \left\|f\right\|_{\mathcal {H}}^{2}}
As theL2{\displaystyle L_{2}}norm isdifferentiable, learning can be advanced bygradient descent.
The learning problem with theleast squaresloss function and Tikhonov regularization can be solved analytically. Written in matrix form, the optimalw{\displaystyle w}is the one for which the gradient of the loss function with respect tow{\displaystyle w}is 0.minw1n(X^w−Y)T(X^w−Y)+λ‖w‖22{\displaystyle \min _{w}{\frac {1}{n}}\left({\hat {X}}w-Y\right)^{\mathsf {T}}\left({\hat {X}}w-Y\right)+\lambda \left\|w\right\|_{2}^{2}}∇w=2nX^T(X^w−Y)+2λw{\displaystyle \nabla _{w}={\frac {2}{n}}{\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+2\lambda w}0=X^T(X^w−Y)+nλw{\displaystyle 0={\hat {X}}^{\mathsf {T}}\left({\hat {X}}w-Y\right)+n\lambda w}w=(X^TX^+λnI)−1(X^TY){\displaystyle w=\left({\hat {X}}^{\mathsf {T}}{\hat {X}}+\lambda nI\right)^{-1}\left({\hat {X}}^{\mathsf {T}}Y\right)}where the third statement is afirst-order condition.
By construction of the optimization problem, other values ofw{\displaystyle w}give larger values for the loss function. This can be verified by examining thesecond derivative∇ww{\displaystyle \nabla _{ww}}.
During training, this algorithm takesO(d3+nd2){\displaystyle O(d^{3}+nd^{2})}time. The terms correspond to the matrix inversion and calculatingXTX{\displaystyle X^{\mathsf {T}}X}, respectively. Testing takesO(nd){\displaystyle O(nd)}time.
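As an illustration of the closed-form solution derived above, a minimal numpy sketch with synthetic data (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
Y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 0.1
# Closed-form ridge solution: w = (X^T X + lambda * n * I)^(-1) X^T Y
w_ridge = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ Y)
print(w_ridge)
```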
Early stopping can be viewed as regularization in time. Intuitively, a training procedure such as gradient descent tends to learn more and more complex functions with increasing iterations. By regularizing for time, model complexity can be controlled, improving generalization.
Early stopping is implemented using one data set for training, one statistically independent data set for validation and another for testing. The model is trained until performance on the validation set no longer improves and then applied to the test set.
Consider the finite approximation ofNeumann seriesfor an invertible matrixAwhere‖I−A‖<1{\displaystyle \left\|I-A\right\|<1}:∑i=0T−1(I−A)i≈A−1{\displaystyle \sum _{i=0}^{T-1}\left(I-A\right)^{i}\approx A^{-1}}
This can be used to approximate the analytical solution of unregularized least squares, ifγis introduced to ensure the norm is less than one.wT=γn∑i=0T−1(I−γnX^TX^)iX^TY^{\displaystyle w_{T}={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}}
The exact solution to the unregularized least squares learning problem minimizes the empirical error, but may fail to generalize. By limitingT, the only free parameter in the algorithm above, the problem is regularized in time, which may improve its generalization.
The algorithm above is equivalent to restricting the number of gradient descent iterations for the empirical riskIs[w]=12n‖X^w−Y^‖Rn2{\displaystyle I_{s}[w]={\frac {1}{2n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|_{\mathbb {R} ^{n}}^{2}}with the gradient descent update:w0=0wt+1=(I−γnX^TX^)wt+γnX^TY^{\displaystyle {\begin{aligned}w_{0}&=0\\[1ex]w_{t+1}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)w_{t}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}}
The base case is trivial. The inductive case is proved as follows:wT=(I−γnX^TX^)γn∑i=0T−2(I−γnX^TX^)iX^TY^+γnX^TY^=γn∑i=1T−1(I−γnX^TX^)iX^TY^+γnX^TY^=γn∑i=0T−1(I−γnX^TX^)iX^TY^{\displaystyle {\begin{aligned}w_{T}&=\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right){\frac {\gamma }{n}}\sum _{i=0}^{T-2}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=1}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}+{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\\[1ex]&={\frac {\gamma }{n}}\sum _{i=0}^{T-1}\left(I-{\frac {\gamma }{n}}{\hat {X}}^{\mathsf {T}}{\hat {X}}\right)^{i}{\hat {X}}^{\mathsf {T}}{\hat {Y}}\end{aligned}}}
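A small numpy sketch of this gradient-descent iteration (illustrative data; the step size γ is chosen so that the iteration matrix has norm below one), showing that the truncation level T is the only free parameter and that large T approaches the unregularized least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 3
X = rng.normal(size=(n, d))
Y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=n)

gamma = 0.5 / np.linalg.norm(X.T @ X / n, 2)     # step size keeping ||I - (gamma/n) X^T X|| < 1
w = np.zeros(d)
T = 50                                           # truncation level: stopping earlier regularizes more
for _ in range(T):
    # same update as w_{t+1} = (I - (gamma/n) X^T X) w_t + (gamma/n) X^T Y
    w = w - (gamma / n) * (X.T @ (X @ w - Y))

w_ls = np.linalg.lstsq(X, Y, rcond=None)[0]      # unregularized least-squares solution
print(w, w_ls)                                   # close for large T, shrunk toward 0 for small T
```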
Assume that a dictionaryϕj{\displaystyle \phi _{j}}with dimensionp{\displaystyle p}is given such that a function in the function space can be expressed as:f(x)=∑j=1pϕj(x)wj{\displaystyle f(x)=\sum _{j=1}^{p}\phi _{j}(x)w_{j}}
Enforcing a sparsity constraint onw{\displaystyle w}can lead to simpler and more interpretable models. This is useful in many real-life applications such ascomputational biology. An example is developing a simple predictive test for a disease in order to minimize the cost of performing medical tests while maximizing predictive power.
A sensible sparsity constraint is theL0{\displaystyle L_{0}}norm‖w‖0{\displaystyle \|w\|_{0}}, defined as the number of non-zero elements inw{\displaystyle w}. Solving aL0{\displaystyle L_{0}}regularized learning problem, however, has been demonstrated to beNP-hard.[7]
TheL1{\displaystyle L_{1}}norm(see alsoNorms) can be used to approximate the optimalL0{\displaystyle L_{0}}norm via convex relaxation. It can be shown that theL1{\displaystyle L_{1}}norm induces sparsity. In the case of least squares, this problem is known asLASSOin statistics andbasis pursuitin signal processing.minw∈Rp1n‖X^w−Y^‖2+λ‖w‖1{\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left\|w\right\|_{1}}
L1{\displaystyle L_{1}}regularization can occasionally produce non-unique solutions. A simple example occurs when the set of solutions that minimize the data term lies along a 45 degree line, so that infinitely many points attain the same minimalL1{\displaystyle L_{1}}norm. This can be problematic for certain applications, and is overcome by combiningL1{\displaystyle L_{1}}withL2{\displaystyle L_{2}}regularization inelastic net regularization, which takes the following form:minw∈Rp1n‖X^w−Y^‖2+λ(α‖w‖1+(1−α)‖w‖22),α∈[0,1]{\displaystyle \min _{w\in \mathbb {R} ^{p}}{\frac {1}{n}}\left\|{\hat {X}}w-{\hat {Y}}\right\|^{2}+\lambda \left(\alpha \left\|w\right\|_{1}+(1-\alpha )\left\|w\right\|_{2}^{2}\right),\alpha \in [0,1]}
Elastic net regularization tends to have a grouping effect, where correlated input features are assigned equal weights.
Elastic net regularization is commonly used in practice and is implemented in many machine learning libraries.
While theL1{\displaystyle L_{1}}norm does not result in an NP-hard problem, theL1{\displaystyle L_{1}}norm is convex but is not strictly differentiable due to the kink at x = 0.Subgradient methodswhich rely on thesubderivativecan be used to solveL1{\displaystyle L_{1}}regularized learning problems. However, faster convergence can be achieved through proximal methods.
For a problemminw∈HF(w)+R(w){\displaystyle \min _{w\in H}F(w)+R(w)}such thatF{\displaystyle F}is convex, continuous, differentiable, with Lipschitz continuous gradient (such as the least squares loss function), andR{\displaystyle R}is convex, continuous, and proper, then the proximal method to solve the problem is as follows. First define theproximal operatorproxR(v)=argminw∈RD{R(w)+12‖w−v‖2},{\displaystyle \operatorname {prox} _{R}(v)=\mathop {\operatorname {argmin} } _{w\in \mathbb {R} ^{D}}\left\{R(w)+{\frac {1}{2}}\left\|w-v\right\|^{2}\right\},}and then iteratewk+1=proxγ,R(wk−γ∇F(wk)){\displaystyle w_{k+1}=\mathop {\operatorname {prox} } _{\gamma ,R}\left(w_{k}-\gamma \nabla F(w_{k})\right)}
The proximal method iteratively performs gradient descent and then projects the result back into the space permitted byR{\displaystyle R}.
WhenR{\displaystyle R}is theL1regularizer, the proximal operator is given componentwise by the soft-thresholding operator,Sλ(v)i={\displaystyle S_{\lambda }(v)_{i}={\begin{cases}v_{i}-\lambda ,&{\text{if }}v_{i}>\lambda \\0,&{\text{if }}v_{i}\in [-\lambda ,\lambda ]\\v_{i}+\lambda ,&{\text{if }}v_{i}<-\lambda \end{cases}}}
This allows for efficient computation.
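A minimal sketch of the resulting proximal-gradient (ISTA) iteration for the L1-regularized least-squares problem above, with the soft-thresholding step written out explicitly (data, step size and λ are illustrative):

```python
import numpy as np

def soft_threshold(v, lam):
    """Componentwise soft-thresholding operator S_lambda(v)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.0, 0.5]                     # sparse ground truth
Y = X @ w_true + 0.01 * rng.normal(size=n)

lam = 0.05
step = 1.0 / np.linalg.norm(X.T @ X / n, 2)       # reciprocal Lipschitz constant of the gradient
w = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ w - Y) / n                  # gradient of the smooth data term (1/(2n))||Xw - Y||^2
    w = soft_threshold(w - step * grad, step * lam)   # proximal step for lam * ||w||_1
print(np.round(w, 2))                             # most coordinates end up exactly 0
```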
Groups of features can be regularized by a sparsity constraint, which can be useful for expressing certain prior knowledge into an optimization problem.
In the case of a linear model with non-overlapping known groups, a regularizer can be defined:R(w)=∑g=1G‖wg‖2,{\displaystyle R(w)=\sum _{g=1}^{G}\left\|w_{g}\right\|_{2},}where‖wg‖2=∑j=1|Gg|(wgj)2{\displaystyle \|w_{g}\|_{2}={\sqrt {\sum _{j=1}^{|G_{g}|}\left(w_{g}^{j}\right)^{2}}}}
This can be viewed as inducing a regularizer over theL2{\displaystyle L_{2}}norm over members of each group followed by anL1{\displaystyle L_{1}}norm over groups.
This can be solved by the proximal method, where the proximal operator is a block-wise soft-thresholding function:
proxλ,R,g(wg)={(1−λ‖wg‖2)wg,if‖wg‖2>λ0,if‖wg‖2≤λ{\displaystyle \operatorname {prox} \limits _{\lambda ,R,g}(w_{g})={\begin{cases}\left(1-{\dfrac {\lambda }{\left\|w_{g}\right\|_{2}}}\right)w_{g},&{\text{if }}\left\|w_{g}\right\|_{2}>\lambda \\[1ex]0,&{\text{if }}\|w_{g}\|_{2}\leq \lambda \end{cases}}}
The algorithm described for group sparsity without overlaps can be applied to the case where groups do overlap, in certain situations. This will likely result in some groups with all zero elements, and other groups with some non-zero and some zero elements.
If it is desired to preserve the group structure, a new regularizer can be defined:R(w)=inf{∑g=1G‖wg‖2:w=∑g=1Gw¯g}{\displaystyle R(w)=\inf \left\{\sum _{g=1}^{G}\|w_{g}\|_{2}:w=\sum _{g=1}^{G}{\bar {w}}_{g}\right\}}
For eachwg{\displaystyle w_{g}},w¯g{\displaystyle {\bar {w}}_{g}}is defined as the vector such that the restriction ofw¯g{\displaystyle {\bar {w}}_{g}}to the groupg{\displaystyle g}equalswg{\displaystyle w_{g}}and all other entries ofw¯g{\displaystyle {\bar {w}}_{g}}are zero. The regularizer finds the optimal disintegration ofw{\displaystyle w}into parts. It can be viewed as duplicating all elements that exist in multiple groups. Learning problems with this regularizer can also be solved with the proximal method with a complication. The proximal operator cannot be computed in closed form, but can be effectively solved iteratively, inducing an inner iteration within the proximal method iteration.
When labels are more expensive to gather than input examples, semi-supervised learning can be useful. Regularizers have been designed to guide learning algorithms to learn models that respect the structure of unsupervised training samples. If a symmetric weight matrixW{\displaystyle W}is given, a regularizer can be defined:R(f)=∑i,jwij(f(xi)−f(xj))2{\displaystyle R(f)=\sum _{i,j}w_{ij}\left(f(x_{i})-f(x_{j})\right)^{2}}
IfWij{\displaystyle W_{ij}}encodes the result of some distance metric for pointsxi{\displaystyle x_{i}}andxj{\displaystyle x_{j}}, it is desirable thatf(xi)≈f(xj){\displaystyle f(x_{i})\approx f(x_{j})}. This regularizer captures this intuition, and is equivalent to:R(f)=f¯TLf¯{\displaystyle R(f)={\bar {f}}^{\mathsf {T}}L{\bar {f}}}whereL=D−W{\displaystyle L=D-W}is theLaplacian matrixof the graph induced byW{\displaystyle W}.
The optimization problemminf∈RmR(f),m=u+l{\displaystyle \min _{f\in \mathbb {R} ^{m}}R(f),m=u+l}can be solved analytically if the constraintf(xi)=yi{\displaystyle f(x_{i})=y_{i}}is applied for all supervised samples. The labeled part of the vectorf{\displaystyle f}is therefore fixed by the labels. The unlabeled part off{\displaystyle f}is solved for by:minfu∈RufTLf=minfu∈Ru{fuTLuufu+flTLlufu+fuTLulfl}{\displaystyle \min _{f_{u}\in \mathbb {R} ^{u}}f^{\mathsf {T}}Lf=\min _{f_{u}\in \mathbb {R} ^{u}}\left\{f_{u}^{\mathsf {T}}L_{uu}f_{u}+f_{l}^{\mathsf {T}}L_{lu}f_{u}+f_{u}^{\mathsf {T}}L_{ul}f_{l}\right\}}Setting the gradient to zero,∇fu=2Luufu+2LulY=0{\displaystyle \nabla _{f_{u}}=2L_{uu}f_{u}+2L_{ul}Y=0}, givesfu=−Luu†(LulY){\displaystyle f_{u}=-L_{uu}^{\dagger }\left(L_{ul}Y\right)}The pseudo-inverse can be taken becauseLul{\displaystyle L_{ul}}has the same range asLuu{\displaystyle L_{uu}}.
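A minimal NumPy sketch of this solution, assuming the labeled points are listed first and W is a symmetric similarity matrix (function and variable names are illustrative):

import numpy as np

def harmonic_label_propagation(W, y_labeled):
    # Graph Laplacian L = D - W; solve for the unlabeled entries of f.
    l = len(y_labeled)
    L = np.diag(W.sum(axis=1)) - W
    L_uu = L[l:, l:]
    L_ul = L[l:, :l]
    # f_u = -pinv(L_uu) @ (L_ul @ y_l), matching the closed form above.
    return -np.linalg.pinv(L_uu) @ (L_ul @ np.asarray(y_labeled, dtype=float))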
In the case of multitask learning,T{\displaystyle T}problems are considered simultaneously, each related in some way. The goal is to learnT{\displaystyle T}functions, ideally borrowing strength from the relatedness of tasks, that have predictive power. This is equivalent to learning the matrixW:T×D{\displaystyle W:T\times D}.
R(w)=‖W‖2,1=∑i=1D‖wi‖2{\displaystyle R(w)=\|W\|_{2,1}=\sum _{i=1}^{D}\left\|w_{i}\right\|_{2}}wherewi{\displaystyle w_{i}}denotes thei{\displaystyle i}-th column ofW{\displaystyle W}.
This regularizer defines an L2 norm on each column and an L1 norm over all columns. It can be solved by proximal methods.
R(w)=‖σ(W)‖1{\displaystyle R(w)=\left\|\sigma (W)\right\|_{1}}whereσ(W){\displaystyle \sigma (W)}is the vector ofsingular valuesin thesingular value decompositionofW{\displaystyle W}.
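Using the standard result that the proximal operator of this regularizer soft-thresholds the singular values, a hedged NumPy sketch is:

import numpy as np

def prox_nuclear_norm(W, lam):
    # SVD, shrink the singular values by lam, and reassemble.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt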
R(f1⋯fT)=∑t=1T‖ft−1T∑s=1Tfs‖Hk2{\displaystyle R(f_{1}\cdots f_{T})=\sum _{t=1}^{T}\left\|f_{t}-{\frac {1}{T}}\sum _{s=1}^{T}f_{s}\right\|_{H_{k}}^{2}}
This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. This is useful for expressing prior information that each task is expected to share with each other task. An example is predicting blood iron levels measured at different times of the day, where each task represents an individual.
R(f1⋯fT)=∑r=1C∑t∈I(r)‖ft−1|I(r)|∑s∈I(r)fs‖Hk2{\displaystyle R(f_{1}\cdots f_{T})=\sum _{r=1}^{C}\sum _{t\in I(r)}\left\|f_{t}-{\frac {1}{|I(r)|}}\sum _{s\in I(r)}f_{s}\right\|_{H_{k}}^{2}}whereI(r){\displaystyle I(r)}is the set of tasks in clusterr{\displaystyle r}and|I(r)|{\displaystyle |I(r)|}is its cardinality.
This regularizer is similar to the mean-constrained regularizer, but instead enforces similarity between tasks within the same cluster. This can capture more complex prior information. This technique has been used to predictNetflixrecommendations. A cluster would correspond to a group of people who share similar preferences.
More generally than above, similarity between tasks can be defined by a function. The regularizer encourages the model to learn similar functions for similar tasks.R(f1⋯fT)=∑t,s=1,t≠sT‖ft−fs‖2Mts{\displaystyle R(f_{1}\cdots f_{T})=\sum _{t,s=1,\,t\neq s}^{T}\left\|f_{t}-f_{s}\right\|^{2}M_{ts}}for a given symmetricsimilarity matrixM{\displaystyle M}.
Bayesian learningmethods make use of aprior probabilitythat (usually) gives lower probability to more complex models. Well-known model selection techniques include theAkaike information criterion(AIC),minimum description length(MDL), and theBayesian information criterion(BIC). Alternative methods of controlling overfitting not involving regularization includecross-validation.
Examples of applications of different methods of regularization to thelinear modelare:
|
https://en.wikipedia.org/wiki/Regularization_(mathematics)#Other_uses_of_regularization_in_statistics_and_machine_learning
|
Inelectrical engineering, theaverage rectified value(ARV) of a quantity is theaverageof itsabsolute value. The ARV of an alternating current indicates which direct current would transport the same amount of electrical charge within the same period of time. On the other hand theRMSdescribes which direct current delivers the same amount of power within the same time period.
Theaverageof a symmetric alternating quantity is zero, so the average itself is not useful for characterizing it. The simplest quantitative measure of its magnitude is therefore the average rectified value. The average rectified value is mainly used to characterizealternating voltage and current. It can be computed by averaging the absolute value of awaveformover one full period of the waveform.[1]
While conceptually similar to theroot mean square(RMS), the ARV differs from it whenever the absolute value of the waveform varies locally, because squaring weights larger values disproportionately, so the RMS then exceeds the ARV. The ratio between the two is expressed by theform factor[2]
|
https://en.wikipedia.org/wiki/Average_rectified_value
|
Inmathematics,Pythagorean additionis abinary operationon thereal numbersthat computes the length of thehypotenuseof aright triangle, given its two sides. Like the more familiar addition and multiplication operations ofarithmetic, it is bothassociativeandcommutative.
This operation can be used in the conversion ofCartesian coordinatestopolar coordinates, and in the calculation ofEuclidean distance. It also provides a simple notation and terminology for thediameterof acuboid, theenergy-momentum relationinphysics, and the overall noise from independent sources of noise. In its applications tosignal processingandpropagationofmeasurement uncertainty, the same operation is also calledaddition in quadrature.[1]A scaled version of this operation gives thequadratic meanorroot mean square.
It is implemented in many programming libraries as thehypotfunction, in a way designed to avoid errors arising due to limited-precision calculations performed on computers.Donald Knuthhas written that "Most of the square root operations in computer programs could probably be avoided if [Pythagorean addition] were more widely available, because people seem to want square roots primarily when they are computing distances."[2]
According to thePythagorean theorem, for aright trianglewith side lengthsa{\displaystyle a}andb{\displaystyle b}, the length of thehypotenusecan be calculated asa2+b2.{\textstyle {\sqrt {a^{2}+b^{2}}}.}This formula defines the Pythagorean addition operation, denoted here as⊕{\displaystyle \oplus }: for any tworeal numbersa{\displaystyle a}andb{\displaystyle b}, the result of this operation is defined to be[3]a⊕b=a2+b2).{\displaystyle a\oplus b={\sqrt {a^{2}+b^{2}{\vphantom {)}}}}.}For instance, thespecial right trianglebased on thePythagorean triple(3,4,5){\displaystyle (3,4,5)}gives3⊕4=5{\displaystyle 3\oplus 4=5}.[4]However, theintegerresult of this example is unusual: for other integer arguments, Pythagorean addition can produce aquadratic irrational numberas its result.[5]
The operation⊕{\displaystyle \oplus }isassociative[6][7]andcommutative.[6][8]Therefore, if three or more numbers are to be combined with this operation, the order of combination makes no difference to the result:x1⊕x2⊕⋯⊕xn=x12+x22+⋯+xn2.{\displaystyle x_{1}\oplus x_{2}\oplus \cdots \oplus x_{n}={\sqrt {x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}}}.}Additionally, on the non-negative real numbers, zero is anidentity elementfor Pythagorean addition. On numbers that can be negative, the Pythagorean sum with zero gives theabsolute value:[3]x⊕0=|x|.{\displaystyle x\oplus 0=|x|.}The three properties of associativity, commutativity, and having an identity element (on the non-negative numbers) are the defining properties of acommutative monoid.[9][10]
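Because the operation is associative and commutative with identity 0, an n-ary Pythagorean sum can be obtained by folding the binary operation over a sequence; a small Python sketch:

import math
from functools import reduce

def pythagorean_sum(values):
    # Fold the two-argument hypot over the sequence; the order of
    # combination does not affect the result.
    return reduce(math.hypot, values, 0.0)

# pythagorean_sum([3.0, 4.0]) -> 5.0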
TheEuclidean distancebetween two points in theEuclidean plane, given by theirCartesian coordinates(x1,y1){\displaystyle (x_{1},y_{1})}and(x2,y2){\displaystyle (x_{2},y_{2})}, is[11](x1−x2)⊕(y1−y2).{\displaystyle (x_{1}-x_{2})\oplus (y_{1}-y_{2}).}In the same way, the distance between three-dimensional points(x1,y1,z1){\displaystyle (x_{1},y_{1},z_{1})}and(x2,y2,z2){\displaystyle (x_{2},y_{2},z_{2})}can be found by repeated Pythagorean addition as[11](x1−x2)⊕(y1−y2)⊕(z1−z2).{\displaystyle (x_{1}-x_{2})\oplus (y_{1}-y_{2})\oplus (z_{1}-z_{2}).}
Repeated Pythagorean addition can also find thediagonallength of arectangleand thediameterof arectangular cuboid. For a rectangle with sidesa{\displaystyle a}andb{\displaystyle b}, the diagonal length isa⊕b{\displaystyle a\oplus b}.[12][13]For a cuboid, the diameter is the longest distance between two points, the length of thebody diagonalof the cuboid. For a cuboid with side lengthsa{\displaystyle a},b{\displaystyle b}, andc{\displaystyle c}, this length isa⊕b⊕c{\displaystyle a\oplus b\oplus c}.[13]
Pythagorean addition (and its implementation as thehypotfunction) is often used together with theatan2function (a two-parameter form of thearctangent) to convert fromCartesian coordinates(x,y){\displaystyle (x,y)}topolar coordinates(r,θ){\displaystyle (r,\theta )}:[14][15]r=x⊕y=hypot(x,y)θ=atan2(y,x).{\displaystyle {\begin{aligned}r&=x\oplus y={\mathsf {hypot}}(x,y)\\\theta &={\mathsf {atan2}}(y,x).\\\end{aligned}}}
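For example, using Python's standard library functions math.hypot and math.atan2, the conversion can be sketched as:

import math

def to_polar(x, y):
    # r via Pythagorean addition, theta via the two-argument arctangent.
    return math.hypot(x, y), math.atan2(y, x)

# to_polar(3.0, 4.0) -> (5.0, 0.9272952180016122)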
Theroot mean squareor quadratic mean of a finite set ofn{\displaystyle n}numbers is1n{\displaystyle {\tfrac {1}{\sqrt {n}}}}times their Pythagorean sum. This is ageneralized meanof the numbers.[16]
Thestandard deviationof a collection of observations is the quadratic mean of their individual deviations from the mean. When two or more independent random variables are added, the standard deviation of their sum is the Pythagorean sum of their standard deviations.[16]Thus, the Pythagorean sum itself can be interpreted as giving the amount of overall noise when combining independent sources of noise.[17]
If theengineering tolerancesof different parts of an assembly are treated as independent noise, they can be combined using a Pythagorean sum.[18]Inexperimental sciencessuch asphysics, addition in quadrature is often used to combine different sources ofmeasurement uncertainty.[19]However, this method ofpropagation of uncertaintyapplies only when there is no correlation between sources of uncertainty,[20]and it has been criticized for conflating experimental noise withsystematic errors.[21]
Theenergy-momentum relationinphysics, describing the energy of a moving particle, can be expressed as the Pythagorean sumE=mc2⊕pc,{\displaystyle E=mc^{2}\oplus pc,}wherem{\displaystyle m}is therest massof a particle,p{\displaystyle p}is itsmomentum,c{\displaystyle c}is thespeed of light, andE{\displaystyle E}is the particle's resultingrelativistic energy.[22]
When combining signals, it can be a useful design technique to arrange for the combined signals to beorthogonalinpolarizationorphase, so that they add in quadrature.[23][24]In earlyradio engineering, this idea was used to designdirectional antennas, allowing signals to be received while nullifying the interference from signals coming from other directions.[23]When the same technique is applied in software to obtain a directional signal from a radio orultrasoundphased array, Pythagorean addition may be used to combine the signals.[25]Other recent applications of this idea include improved efficiency in thefrequency conversionoflasers.[24]
In thepsychophysicsofhaptic perception, Pythagorean addition has been proposed as a model for the perceived intensity ofvibrationwhen two kinds of vibration are combined.[26]
Inimage processing, theSobel operatorforedge detectionconsists of aconvolutionstep to determine thegradientof an image followed by a Pythagorean sum at each pixel to determine the magnitude of the gradient.[27]
In a 1983 paper,Cleve Molerand Donald Morrison described aniterative methodfor computing Pythagorean sums, without taking square roots.[3]This was soon recognized to be an instance ofHalley's method,[8]and extended to analogous operations onmatrices.[7]
Although many modern implementations of this operation instead compute Pythagorean sums by reducing the problem to thesquare rootfunction,
they do so in a way that has been designed to avoid errors arising from the limited-precision calculations performed on computers. If calculated using the natural formula,r=x2+y2,{\displaystyle r={\sqrt {x^{2}+y^{2}}},}the squares of very large or small values ofx{\displaystyle x}andy{\displaystyle y}may exceed the range ofmachine precisionwhen calculated on a computer. This may lead to an inaccurate result caused byarithmetic underflowandoverflow, although when overflow and underflow do not occur the output is within twoulpof the exact result.[28][29][30]Common implementations of thehypotfunction rearrange this calculation in a way that avoids the problem of overflow and underflow and are even more precise.[31]
If either input tohypotis infinite, the result is infinite. Because this is true for all possible values of the other input, theIEEE 754floating-point standard requires that this remains true even when the other input isnot a number(NaN).[32]
The difficulty with the naive implementation is thatx2+y2{\displaystyle x^{2}+y^{2}}may overflow or underflow, unless the intermediate result is computed withextended precision. A common implementation technique is to exchange the values, if necessary, so that|x|≥|y|{\displaystyle |x|\geq |y|}, and then to use the equivalent formr=|x|1+(yx)2.{\displaystyle r=|x|{\sqrt {1+\left({\frac {y}{x}}\right)^{2}}}.}
The computation ofy/x{\displaystyle y/x}cannot overflow unless bothx{\displaystyle x}andy{\displaystyle y}are zero. Ify/x{\displaystyle y/x}underflows, the final result is equal to|x|{\displaystyle |x|}, which is correct within the precision of the calculation. The square root is computed of a value between 1 and 2. Finally, the multiplication by|x|{\displaystyle |x|}cannot underflow, and overflows only when the result is too large to represent.[31]
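A minimal sketch of this rearrangement (not the exact routine used by any particular library, which typically handles special values and rounding more carefully):

def hypot_scaled(x, y):
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x                # ensure |x| >= |y|
    if x == 0.0:
        return 0.0                 # both inputs are zero
    t = y / x                      # lies in [0, 1], so t*t cannot overflow
    return x * (1.0 + t * t) ** 0.5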
One drawback of this rearrangement is the additional division byx{\displaystyle x}, which increases both the time and inaccuracy of the computation.
More complex implementations avoid these costs by dividing the inputs into more cases:
Additional techniques allow the result to be computed more accurately than the naive algorithm, e.g. to less than oneulp.[31]Researchers have also developed analogous algorithms for computing Pythagorean sums of more than two values.[33]
Thealpha max plus beta min algorithmis a high-speed approximation of Pythagorean addition using only comparison, multiplication, and addition, producing a value whose error is less than 4% of the correct result. It is computed asa⊕b≈α⋅max(a,b)+β⋅min(a,b){\displaystyle a\oplus b\approx \alpha \cdot \max(a,b)+\beta \cdot \min(a,b)}for a careful choice of parametersα{\displaystyle \alpha }andβ{\displaystyle \beta }.[34]
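A sketch using one published parameter pair (alpha ≈ 0.9604, beta ≈ 0.3978, which keeps the relative error under about 4%; treat the exact constants as an assumption here):

def alpha_max_beta_min(a, b, alpha=0.9604, beta=0.3978):
    # Approximates sqrt(a*a + b*b) using only comparison, multiplication, and addition.
    a, b = abs(a), abs(b)
    return alpha * max(a, b) + beta * min(a, b)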
The Pythagorean addition function is present as thehypotfunction in manyprogramming languagesand their libraries. These include:CSS,[35]D,[36]Fortran,[37]Go,[38]JavaScript(since ES2015),[11]Julia,[39]MATLAB,[40]PHP,[41]andPython.[42]C++11includes a two-argument version ofhypot, and a three-argument version forx⊕y⊕z{\displaystyle x\oplus y\oplus z}has been included sinceC++17.[43]TheJavaimplementation ofhypot[44]can be used by its interoperable JVM-based languages includingApache Groovy,Clojure,Kotlin, andScala.[45]Similarly, the version ofhypotincluded withRubyextends to Ruby-baseddomain-specific languagessuch asProgress Chef.[46]InRust,hypotis implemented as amethodoffloating pointobjects rather than as a two-argument function.[47]
Metafonthas Pythagorean addition and subtraction as built-in operations, under the symbols++and+-+respectively.[2]
ThePythagorean theoremon which this operation is based was studied in ancientGreek mathematics, and may have been known earlier inEgyptian mathematicsandBabylonian mathematics; seePythagorean theorem § History.[48]However, its use for computing distances in Cartesian coordinates could not come until afterRené Descartesinvented these coordinates in 1637; the formula for distance from these coordinates was published byAlexis Clairautin 1731.[49]
The terms "Pythagorean addition" and "Pythagorean sum" for this operation have been used at least since the 1950s,[18][50]and its use in signal processing as "addition in quadrature" goes back at least to 1919.[23]
From the 1920s to the 1940s, before the widespread use of computers, multiple designers ofslide rulesincluded square-root scales in their devices, allowing Pythagorean sums to be calculated mechanically.[51][52][53]Researchers have also investigatedanalog circuitsfor approximating the value of Pythagorean sums.[54]
|
https://en.wikipedia.org/wiki/Pythagorean_addition
|
For the measurement of analternating currentthe signal is often converted into adirect currentof equivalent value, theroot mean square(RMS). Simple instrumentation and signal converters carry out this conversion by filtering the signal into anaverage rectified valueand applying a correction factor. The value of the correction factor applied is only correct if the input signal issinusoidal.
True RMS provides a more correct value that is proportional to the square root of the average of the square of the curve, and not to the average of the absolute value. For any givenwaveform, the ratio of these two averages is constant and, as most measurements are made on what are (nominally) sine waves, the correction factor assumes this waveform; but any distortion or offsets will lead to errors. To achieve this, atrue RMS converterrequires a more complex circuit.
If a waveform has been digitized, the correct RMS value may be calculated directly. Most digital and PC-basedoscilloscopesinclude a function to give the RMS value of a waveform. The precision and the bandwidth of the conversion are entirely dependent on the analog-to-digital conversion. In most cases, true RMS measurements are made on repetitive waveforms, and under such conditions digital oscilloscopes (and a few sophisticated sampling multimeters) are able to achieve very high bandwidths as they sample at much higher sampling frequency than the signal frequency to obtain a stroboscopic effect.
The RMS value of analternating currentis also known as itsheating value, as it is equivalent to thedirect currentvalue that would be required to produce the same heating effect. For example, if 120 V AC RMS is applied to a resistiveheating elementit would heat up by exactly the same amount as if 120 V DC were applied.
This principle was exploited in early thermal converters. The AC signal would be applied to a small heating element that was matched with athermistor, which could be used in a DC measuring circuit.
The technique is not very precise but it will measure any waveform at any frequency (except for extremely low frequencies, where the thermistor's thermal capacitance is too small so that its temperature is fluctuating too much). A big drawback is that it is low-impedance: that is, the power used to heat the thermistor comes from the circuit being measured. If the circuit being measured can support the heating current, then it is possible to make a post-measurement calculation to correct the effect, as the impedance of the heating element is known. If the signal is small then a pre-amplifier is necessary, and the measuring capabilities of the instrument will be limited by this pre-amplifier. In radio frequency (RF) work, the low impedance is not necessarily a drawback since 50 ohm driving and terminating impedances are widely used.
Thermal converters have become rare, but are still used by radio hams and hobbyists, who may remove the thermal element of an old unreliable instrument and incorporate it into a modern design of their own construction. Additionally, at very high frequencies (microwave), RF power meters still use thermal techniques to convert the RF energy to a voltage. Thermal-based power meters are the norm for millimeter wave(MMW)RF work.
Analog electronic circuits may use:
Unlike thermal converters they are subject tobandwidthlimitations which makes them unsuitable for mostRFwork. The circuitry before time averaging is particularly crucial for high-frequency performance. Theslew ratelimitation of the operational amplifier used to create the absolute value (especially at low input signal levels) tends to make the second method the poorest at high frequencies, while the FET method can work close to VHF. Specialist techniques are required to produce sufficiently accurate integrated circuits for complex analog calculations, and very often meters equipped with such circuits offer true RMS conversion as an optional extra with a significant price increase.
|
https://en.wikipedia.org/wiki/True_RMS_converter
|
Algorithms for calculating varianceplay a major role incomputational statistics. A key difficulty in the design of goodalgorithmsfor this problem is that formulas for thevariancemay involve sums of squares, which can lead tonumerical instabilityas well as toarithmetic overflowwhen dealing with large values.
A formula for calculating the variance of an entirepopulationof sizeNis:
UsingBessel's correctionto calculate anunbiasedestimate of the population variance from a finitesampleofnobservations, the formula is:
Therefore, a naïve algorithm to calculate the estimated variance is given by the following:
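A minimal Python sketch of such a naïve computation (variable names are illustrative):

def naive_variance(data):
    n = 0
    total = 0.0      # Sum
    total_sq = 0.0   # SumSq
    for x in data:
        n += 1
        total += x
        total_sq += x * x
    return (total_sq - total * total / n) / (n - 1)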
This algorithm can easily be adapted to compute the variance of a finite population: simply divide byninstead ofn− 1 on the last line.
BecauseSumSqand(Sum×Sum)/ncan be very similar numbers,cancellationcan cause theprecisionof the result to be much less than the inherent precision of thefloating-point arithmeticused to perform the computation. Thus this algorithm should not be used in practice,[1][2]and several alternative, numerically stable algorithms have been proposed.[3]The problem is particularly severe when the standard deviation is small relative to the mean.
The variance isinvariantwith respect to changes in alocation parameter, a property which can be used to avoid the catastrophic cancellation in this formula.
withK{\displaystyle K}any constant, which leads to the new formula
The closerK{\displaystyle K}is to the mean value, the more accurate the result will be, but simply choosing a value inside the range of the samples will guarantee the desired stability. If the values(xi−K){\displaystyle (x_{i}-K)}are small, then there are no problems with the sum of their squares; conversely, if they are large, it necessarily means that the variance is large as well. In either case the second term in the formula is always smaller than the first, so no catastrophic cancellation can occur.[2]
If just the first sample is taken asK{\displaystyle K}the algorithm can be written inPython programming languageas
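A sketch consistent with that description (illustrative names):

def shifted_data_variance(data):
    if len(data) < 2:
        return 0.0
    K = data[0]        # any value inside the range of the samples works
    n = 0
    ex = 0.0           # running sum of (x - K)
    ex2 = 0.0          # running sum of (x - K)^2
    for x in data:
        n += 1
        ex += x - K
        ex2 += (x - K) * (x - K)
    return (ex2 - ex * ex / n) / (n - 1)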
This formula also facilitates the incremental computation that can be expressed as
An alternative approach, using a different formula for the variance, first computes the sample mean,
and then computes the sum of the squares of the differences from the mean,
wheresis the standard deviation. This is given by the following code:
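A minimal two-pass sketch (returning the unbiased sample variance):

def two_pass_variance(data):
    n = len(data)
    mean = sum(data) / n                        # first pass: the mean
    ss = sum((x - mean) ** 2 for x in data)     # second pass: sum of squared deviations
    return ss / (n - 1)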
This algorithm is numerically stable ifnis small.[1][4]However, the results of both of these simple algorithms ("naïve" and "two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such ascompensated summationcan be used to combat this error to a degree.
It is often useful to be able to compute the variance in asingle pass, inspecting each valuexi{\displaystyle x_{i}}only once; for example, when the data is being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such anonline algorithm, arecurrence relationis required between quantities from which the required statistics can be calculated in a numerically stable fashion.
The following formulas can be used to update themeanand (estimated) variance of the sequence, for an additional elementxn. Here,x¯n=1n∑i=1nxi{\textstyle {\overline {x}}_{n}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}}denotes the sample mean of the firstnsamples(x1,…,xn){\displaystyle (x_{1},\dots ,x_{n})},σn2=1n∑i=1n(xi−x¯n)2{\textstyle \sigma _{n}^{2}={\frac {1}{n}}\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}_{n}\right)^{2}}theirbiased sample variance, andsn2=1n−1∑i=1n(xi−x¯n)2{\textstyle s_{n}^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}_{n}\right)^{2}}theirunbiased sample variance.
These formulas suffer from numerical instability[citation needed], as they repeatedly subtract a small number from a big number which scales withn. A better quantity for updating is the sum of squares of differences from the current mean,∑i=1n(xi−x¯n)2{\textstyle \sum _{i=1}^{n}(x_{i}-{\bar {x}}_{n})^{2}}, here denotedM2,n{\displaystyle M_{2,n}}:
This algorithm was found by Welford,[5][6]and it has been thoroughly analyzed.[2][7]It is also common to denoteMk=x¯k{\displaystyle M_{k}={\bar {x}}_{k}}andSk=M2,k{\displaystyle S_{k}=M_{2,k}}.[8]
An example Python implementation for Welford's algorithm is given below.
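A hedged sketch of the update and finalization steps, storing the running aggregate as a (count, mean, M2) tuple (names are illustrative):

def welford_update(aggregate, new_value):
    # Incorporate one new value into the running (count, mean, M2).
    count, mean, M2 = aggregate
    count += 1
    delta = new_value - mean
    mean += delta / count
    delta2 = new_value - mean
    M2 += delta * delta2
    return count, mean, M2

def welford_finalize(aggregate):
    # Return (mean, biased variance, unbiased sample variance).
    count, mean, M2 = aggregate
    return mean, M2 / count, M2 / (count - 1)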
This algorithm is much less prone to loss of precision due tocatastrophic cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.
Theparallel algorithmbelow illustrates how to merge multiple sets of statistics calculated online.
The algorithm can be extended to handle unequal sample weights, replacing the simple counternwith the sum of weights seen so far. West (1979)[9]suggests thisincremental algorithm:
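The following is a sketch of one standard weighted formulation consistent with that description (not necessarily West's exact pseudocode; the normalization chosen here is an assumption):

def weighted_incremental_variance(data_weight_pairs):
    w_sum = 0.0
    mean = 0.0
    S = 0.0
    for x, w in data_weight_pairs:
        w_sum += w
        mean_old = mean
        mean += (w / w_sum) * (x - mean_old)
        S += w * (x - mean_old) * (x - mean)
    return S / w_sum    # biased weighted variance; other normalizations exist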
Chan et al.[10]note that Welford's online algorithm detailed above is a special case of an algorithm that works for combining arbitrary setsA{\displaystyle A}andB{\displaystyle B}:
This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.
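A sketch of the pairwise combination, merging per-partition (count, mean, M2) summaries (illustrative names):

def merge_variance(n_a, mean_a, M2_a, n_b, mean_b, M2_b):
    # Combine statistics of two disjoint data sets A and B.
    # See the note below about the mean update when n_a and n_b are both large and similar.
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    M2 = M2_a + M2_b + delta * delta * n_a * n_b / n
    return n, mean, M2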
Chan's method for estimating the mean is numerically unstable whennA≈nB{\displaystyle n_{A}\approx n_{B}}and both are large, because the numerical error inδ=x¯B−x¯A{\displaystyle \delta ={\bar {x}}_{B}-{\bar {x}}_{A}}is not scaled down in the way that it is in thenB=1{\displaystyle n_{B}=1}case. In such cases, preferx¯AB=nAx¯A+nBx¯BnAB{\textstyle {\bar {x}}_{AB}={\frac {n_{A}{\bar {x}}_{A}+n_{B}{\bar {x}}_{B}}{n_{AB}}}}.
This can be generalized to allow parallelization withAVX, withGPUs, andcomputer clusters, and to covariance.[3]
Assume that all floating point operations use standardIEEE 754 double-precisionarithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of population variance is 30. Both the naïve algorithm and two-pass algorithm compute these values correctly.
Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30.
While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the offset makes the error catastrophic. Consider the sample (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666. This is a serious problem with the naïve algorithm and is due tocatastrophic cancellationin the subtraction of two similar numbers at the final stage of the algorithm.
Terriberry[11]extends Chan's formulae to calculating the third and fourthcentral moments, needed for example when estimatingskewnessandkurtosis:
Here theMk{\displaystyle M_{k}}are again the sums of powers of differences from the mean∑(x−x¯)k{\textstyle \sum (x-{\overline {x}})^{k}}, giving
For the incremental case (i.e.,B={x}{\displaystyle B=\{x\}}), this simplifies to:
By preserving the valueδ/n{\displaystyle \delta /n}, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost.
An example of the online algorithm for kurtosis implemented as described is:
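A sketch of such an online kurtosis computation (returns the excess kurtosis; names are illustrative):

def online_kurtosis(data):
    n = 0
    mean = M2 = M3 = M4 = 0.0
    for x in data:
        n1 = n
        n += 1
        delta = x - mean
        delta_n = delta / n
        delta_n2 = delta_n * delta_n
        term1 = delta * delta_n * n1
        mean += delta_n
        M4 += term1 * delta_n2 * (n * n - 3 * n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 += term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 += term1
    return n * M4 / (M2 * M2) - 3.0   # excess kurtosis (assumes M2 > 0)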
Pébaÿ[12]further extends these results to arbitrary-ordercentral moments, for the incremental and the pairwise cases, and subsequently Pébaÿ et al.[13]for weighted and compound moments. One can also find there similar formulas forcovariance.
Choi and Sweetman[14]offer two alternative methods to compute the skewness and kurtosis, each of which can save substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting histogram, which effectively becomes aone-pass algorithmfor higher moments. One benefit is that the statistical moment calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin are counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin:
whereh(xk){\displaystyle h(x_{k})}andH(xk){\displaystyle H(x_{k})}represent the frequency and the relative frequency at binxk{\displaystyle x_{k}}andA=∑k=1Kh(xk)Δxk{\textstyle A=\sum _{k=1}^{K}h(x_{k})\,\Delta x_{k}}is the total area of the histogram. After this normalization, then{\displaystyle n}raw moments and central moments ofx(t){\displaystyle x(t)}can be calculated from the relative histogram:
where the superscript(h){\displaystyle ^{(h)}}indicates the moments are calculated from the histogram. For constant bin widthΔxk=Δx{\displaystyle \Delta x_{k}=\Delta x}these two expressions can be simplified usingI=A/Δx{\displaystyle I=A/\Delta x}:
The second approach from Choi and Sweetman[14]is an analytical methodology to combine statistical moments from individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This methodology could be used for parallel computation of statistical moments with subsequent combination of those moments, or for combination of statistical moments computed at sequential times.
IfQ{\displaystyle Q}sets of statistical moments are known:(γ0,q,μq,σq2,α3,q,α4,q){\displaystyle (\gamma _{0,q},\mu _{q},\sigma _{q}^{2},\alpha _{3,q},\alpha _{4,q})\quad }forq=1,2,…,Q{\displaystyle q=1,2,\ldots ,Q}, then eachγn{\displaystyle \gamma _{n}}can
be expressed in terms of the equivalentn{\displaystyle n}raw moments:
whereγ0,q{\displaystyle \gamma _{0,q}}is generally taken to be the duration of theqth{\displaystyle q^{th}}time-history, or the number of points ifΔt{\displaystyle \Delta t}is constant.
The benefit of expressing the statistical moments in terms ofγ{\displaystyle \gamma }is that theQ{\displaystyle Q}sets can be combined by addition, and there is no upper limit on the value ofQ{\displaystyle Q}.
where the subscriptc{\displaystyle _{c}}represents the concatenated time-history or combinedγ{\displaystyle \gamma }. These combined values ofγ{\displaystyle \gamma }can then be inversely transformed into raw moments representing the complete concatenated time-history
Known relationships between the raw moments (mn{\displaystyle m_{n}}) and the central moments (θn=E⁡[(x−μ)n]{\displaystyle \theta _{n}=\operatorname {E} [(x-\mu )^{n}]})
are then used to compute the central moments of the concatenated time-history. Finally, the statistical moments of the concatenated history are computed from the central moments:
Very similar algorithms can be used to compute thecovariance.
The naïve algorithm is
For the algorithm above, one could use the following Python code:
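A sketch of the naïve computation (population covariance; divide by n − 1 instead for the sample estimate):

def naive_covariance(data1, data2):
    n = len(data1)
    sum1 = sum(data1)
    sum2 = sum(data2)
    sum12 = sum(x * y for x, y in zip(data1, data2))
    return (sum12 - sum1 * sum2 / n) / n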
As for the variance, the covariance of two random variables is also shift-invariant, so given any two constant valueskx{\displaystyle k_{x}}andky,{\displaystyle k_{y},}it can be written:
and again choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set, the algorithm can be written as:
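A sketch of that shifted computation (population covariance; the first values of each data set serve as the shifts):

def shifted_data_covariance(data_x, data_y):
    n = len(data_x)
    kx = data_x[0]
    ky = data_y[0]
    ex = ey = exy = 0.0
    for x, y in zip(data_x, data_y):
        ex += x - kx
        ey += y - ky
        exy += (x - kx) * (y - ky)
    return (exy - ex * ey / n) / n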
The two-pass algorithm first computes the sample means, and then the covariance:
The two-pass algorithm may be written as:
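A minimal two-pass sketch (population covariance):

def two_pass_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n
    return sum((x - mean1) * (y - mean2)
               for x, y in zip(data1, data2)) / n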
A slightly more accurate compensated version performs the full naive algorithm on the residuals. The final sums∑ixi{\textstyle \sum _{i}x_{i}}and∑iyi{\textstyle \sum _{i}y_{i}}shouldbe zero, but the second pass compensates for any small error.
A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes co-momentCn=∑i=1n(xi−x¯n)(yi−y¯n){\textstyle C_{n}=\sum _{i=1}^{n}(x_{i}-{\bar {x}}_{n})(y_{i}-{\bar {y}}_{n})}:
The apparent asymmetry in that last equation is due to the fact that(xn−x¯n)=n−1n(xn−x¯n−1){\textstyle (x_{n}-{\bar {x}}_{n})={\frac {n-1}{n}}(x_{n}-{\bar {x}}_{n-1})}, so both update terms are equal ton−1n(xn−x¯n−1)(yn−y¯n−1){\textstyle {\frac {n-1}{n}}(x_{n}-{\bar {x}}_{n-1})(y_{n}-{\bar {y}}_{n-1})}. Even greater accuracy can be achieved by first computing the means, then using the stable one-pass algorithm on the residuals.
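A sketch of the corresponding online update, returning the population covariance (illustrative names):

def online_covariance(data1, data2):
    n = 0
    mean_x = mean_y = C = 0.0
    for x, y in zip(data1, data2):
        n += 1
        dx = x - mean_x              # uses the previous mean of x
        mean_x += dx / n
        mean_y += (y - mean_y) / n
        C += dx * (y - mean_y)       # uses the updated mean of y
    return C / n                     # use C / (n - 1) for the sample estimate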
Thus the covariance can be computed as
A small modification can also be made to compute the weighted covariance:
Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:[3]
A version of the weighted online algorithm that performs batched updates also exists: letw1,…wN{\displaystyle w_{1},\dots w_{N}}denote the weights, and write
The covariance can then be computed as
|
https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
|
Instatistics, theresidual sum of squares(RSS), also known as thesum of squared residuals(SSR) or thesum of squared estimate of errors(SSE), is thesumof thesquaresofresiduals(deviations predicted from actual empirical values of data). It is a measure of the discrepancy between the data and an estimation model, such as alinear regression. A small RSS indicates a tight fit of the model to the data. It is used as anoptimality criterionin parameter selection andmodel selection.
In general,total sum of squares=explained sum of squares+ residual sum of squares. For a proof of this in the multivariateordinary least squares(OLS) case, seepartitioning in the general OLS model.
In a model with a single explanatory variable, RSS is given by:[1]RSS=∑i=1n(yi−f(xi))2{\displaystyle \operatorname {RSS} =\sum _{i=1}^{n}\left(y_{i}-f(x_{i})\right)^{2}}
whereyiis theithvalue of the variable to be predicted,xiis theithvalue of the explanatory variable, andf(xi){\displaystyle f(x_{i})}is the predicted value ofyi(also termedyi^{\displaystyle {\hat {y_{i}}}}).
In a standard linear simpleregression model,yi=α+βxi+εi{\displaystyle y_{i}=\alpha +\beta x_{i}+\varepsilon _{i}\,}, whereα{\displaystyle \alpha }andβ{\displaystyle \beta }arecoefficients,yandxare theregressandand theregressor, respectively, and ε is theerror term. The sum of squares of residuals is the sum of squares ofε^i{\displaystyle {\widehat {\varepsilon \,}}_{i}}; that isRSS=∑i=1nε^i2=∑i=1n(yi−α^−β^xi)2{\displaystyle \operatorname {RSS} =\sum _{i=1}^{n}{\widehat {\varepsilon \,}}_{i}^{\,2}=\sum _{i=1}^{n}\left(y_{i}-{\widehat {\alpha \,}}-{\widehat {\beta \,}}x_{i}\right)^{2}}
whereα^{\displaystyle {\widehat {\alpha \,}}}is the estimated value of the constant termα{\displaystyle \alpha }andβ^{\displaystyle {\widehat {\beta \,}}}is the estimated value of the slope coefficientβ{\displaystyle \beta }.
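As a small illustration (predicted values assumed precomputed), RSS can be evaluated directly:

def residual_sum_of_squares(y, y_pred):
    # Sum of squared residuals between observed and predicted values.
    return sum((yi - fi) ** 2 for yi, fi in zip(y, y_pred))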
The general regression model withnobservations andkexplanators, the first of which is a constant unit vector whose coefficient is the regression intercept, is
whereyis ann× 1 vector of dependent variable observations, each column of then×kmatrixXis a vector of observations on one of thekexplanators,β{\displaystyle \beta }is ak× 1 vector of true coefficients, andeis ann× 1 vector of the true underlying errors. Theordinary least squaresestimator forβ{\displaystyle \beta }is
The residual vectore^=y−Xβ^=y−X(XTX)−1XTy{\displaystyle {\hat {e}}=y-X{\hat {\beta }}=y-X(X^{\operatorname {T} }X)^{-1}X^{\operatorname {T} }y}; so the residual sum of squares is:
(equivalent to the square of thenormof residuals). In full:
whereHis thehat matrix, or the projection matrix in linear regression.
Theleast-squares regression lineis given by
whereb=y¯−ax¯{\displaystyle b={\bar {y}}-a{\bar {x}}}anda=SxySxx{\displaystyle a={\frac {S_{xy}}{S_{xx}}}}, whereSxy=∑i=1n(x¯−xi)(y¯−yi){\displaystyle S_{xy}=\sum _{i=1}^{n}({\bar {x}}-x_{i})({\bar {y}}-y_{i})}andSxx=∑i=1n(x¯−xi)2.{\displaystyle S_{xx}=\sum _{i=1}^{n}({\bar {x}}-x_{i})^{2}.}
Therefore,
whereSyy=∑i=1n(y¯−yi)2.{\displaystyle S_{yy}=\sum _{i=1}^{n}({\bar {y}}-y_{i})^{2}.}
ThePearson product-moment correlationis given byr=SxySxxSyy;{\displaystyle r={\frac {S_{xy}}{\sqrt {S_{xx}S_{yy}}}};}therefore,RSS=Syy(1−r2).{\displaystyle \operatorname {RSS} =S_{yy}(1-r^{2}).}
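A small sketch computing RSS from these sums, following the identity above (RSS = Syy − Sxy²/Sxx, which equals Syy(1 − r²)):

def rss_from_sums(xs, ys):
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((xbar - x) ** 2 for x in xs)
    syy = sum((ybar - y) ** 2 for y in ys)
    sxy = sum((xbar - x) * (ybar - y) for x, y in zip(xs, ys))
    return syy - sxy * sxy / sxx    # equals Syy * (1 - r**2)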
|
https://en.wikipedia.org/wiki/Residual_sum_of_squares
|
Ineconometricsand other applications of multivariatetime series analysis, avariance decompositionorforecast error variance decomposition(FEVD) is used to aid in the interpretation of avector autoregression(VAR) model once it has been fitted.[1]Thevariancedecomposition indicates the amount of information each variable contributes to the other variables in the autoregression. It determines how much of the forecast error variance of each of the variables can be explained by exogenous shocks to the other variables.
For the VAR (p) of form
This can be changed to a VAR(1) structure by writing it in companion form (see general matrix notation of a VAR(p))
whereyt{\displaystyle y_{t}},ν{\displaystyle \nu }andu{\displaystyle u}arek{\displaystyle k}dimensional column vectors,A{\displaystyle A}iskp{\displaystyle kp}bykp{\displaystyle kp}dimensional matrix andY{\displaystyle Y},V{\displaystyle V}andU{\displaystyle U}arekp{\displaystyle kp}dimensional column vectors.
Themean squared errorof the h-step forecast of variablej{\displaystyle j}is
and where
The amount of forecast error variance of variablej{\displaystyle j}accounted for by exogenous shocks to variablel{\displaystyle l}is given byωjl,h,{\displaystyle \omega _{jl,h},}
|
https://en.wikipedia.org/wiki/Variance_decomposition_of_forecast_errors
|
Thehuman brainanatomicalregions are ordered following standardneuroanatomyhierarchies.Functional,connective, anddevelopmentalregions are listed in parentheses where appropriate.
Other areas that have been included in the limbic system include the:
2° (Spinomesencephalic tract→Superior colliculusofMidbrain tectum)
|
https://en.wikipedia.org/wiki/List_of_regions_in_the_human_brain
|
Neural engineering(also known asneuroengineering) is a discipline withinbiomedical engineeringthat uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs.[1]
The field of neural engineering draws on the fields ofcomputational neuroscience, experimental neuroscience,neurology,electrical engineeringandsignal processingof living neural tissue, and encompasses elements fromrobotics,cybernetics,computer engineering,neural tissue engineering,materials science, andnanotechnology.
Prominent goals in the field include restoration andaugmentationof human function via direct interactions between the nervous system andartificial devices.
Much current research is focused on understanding the coding and processing of information in thesensoryandmotorsystems, quantifying how this processing is altered in thepathologicalstate, and how it can be manipulated through interactions with artificial devices includingbrain–computer interfacesandneuroprosthetics.
Other research concentrates more on investigation by experimentation, including the use ofneural implantsconnected with external technology.
Neurohydrodynamicsis a division of neural engineering that focuses onhydrodynamicsof the neurological system.
The origins of neural engineering begin with Italian physicist and biologistLuigi Galvani. Galvani, along with pioneers likeEmil du Bois-Reymond, discovered that electrical signals in nerves and muscles control movement, which marks the first understanding of the brain's electrical nature.[2][3]As neural engineering is a relatively new field, information and research relating to it is comparatively limited, although this is changing rapidly. The first journals specifically devoted to neural engineering,The Journal of Neural EngineeringandThe Journal of NeuroEngineering and Rehabilitation, both emerged in 2004. International conferences on neural engineering have been held by the IEEE since 2003: the 4th Conference on Neural Engineering was held in Antalya, Turkey, from 29 April to 2 May 2009,[4]the 5th International IEEE EMBS Conference on Neural Engineering in April/May 2011 inCancún,Mexico, and the 6th conference inSan Diego,Californiain November 2013. The 7th conference was held in April 2015 inMontpellier. The 8th conference was held in May 2017 inShanghai. In 2003 one of the defining talks of the conference, given by Dr. Carol Lucas, the biomedical program director of theNational Science Foundationat the time, provided insights into the future of neural engineering and neuroscience initiatives. Her talk covered over 200 papers spanning an array of topics, including neural informatics, behavioral dynamics, and brain imaging. This was the fundamental base work for future research regarding neural engineering.[5]Another milestone in the development of neuroengineering was identified in 2024 with the introduction of the notion of the mother-fetus neurocognitive model.[6][7][8]Because it explains nonlocal interactions of bio-systems, this field of knowledge opens up new horizons for applying engineering methods to the repair, replacement, and enhancement of neural systems. This knowledge provides a new approach tononinvasivebrain-machine interaction and integration.[7][8]
The core principles of neuroengineering revolve around understanding the interplay among neurons, neural networks, and the functions of the nervous system to create measurable models that facilitate the creation of devices capable of interpreting and controlling signals to generate meaningful responses. The primary focus of progress in this field lies in constructing theoretical models that mimic entire biological systems or their functional components found in nature. The central objective of this technological advancement phase is the integration of machinery with the nervous system. Progress in this area enables the monitoring and modulation of neural activity. For instance, because the mother-fetus interactions enable the child's nervous system to evolve with adequate biological sentience and provide first achievements in the cognitive development,[7][8]studying the mother-fetus neurocognitive model paves the way to design noninvasive computer management by the brain[7]and medical devices for noninvasive treatment of injured nervous systems.[8]
Messages that the body uses to influence thoughts, senses, movements, and survival are directed by nerve impulses transmitted across brain tissue and to the rest of the body.Neuronsare the basic functional unit of the nervous system and are highly specialized cells that are capable of sending these signals that operate high and low level functions needed for survival and quality of life. Neurons have special electro-chemical properties that allow them to process information and then transmit that information to other cells. Neuronal activity is dependent upon neural membrane potential and the changes that occur along and across it. A constant voltage, known as themembrane potential, is normally maintained by certain concentrations of specific ions across neuronal membranes. Disruptions or variations in this voltage create an imbalance, or polarization, across the membrane.Depolarizationof the membrane past itsthreshold potentialgenerates an action potential, which is the main source of signal transmission, known asneurotransmissionof the nervous system. Anaction potentialresults in a cascade of ion flux down and across an axonal membrane, creating an effective voltage spike train or "electrical signal" which can transmit further electrical changes in other cells. Signals can be generated by electrical, chemical, magnetic, optical, and other forms of stimuli that influence the flow of charges, and thus voltage levels across neural membranes.[9][pages needed]
Engineers employ quantitative tools that can be used for understanding and interacting with complex neural systems. Methods of studying and generating chemical, electrical, magnetic, and optical signals responsible for extracellular field potentials and synaptic transmission in neural tissue aid researchers in the modulation of neural system activity.[10]To understand properties of neural system activity, engineers use signal processing techniques and computational modeling.[11]To process these signals, neural engineers must translate the voltages across neural membranes into corresponding code, a process known as neural coding.Neural codingstudies how the brain encodes simple commands in the form of central pattern generators (CPGs), movement vectors, the cerebellar internal model, and somatotopic maps to understand movement and sensory phenomena. Decoding of these signals in the realm ofneuroscienceis the process by which neurons understand the voltages that have been transmitted to them. Transformations are the mechanisms by which signals of one form are interpreted and then translated into another form. Engineers look to mathematically model these transformations.[11]There are a variety of methods being used to record these voltage signals. These can be intracellular or extracellular. Extracellular methods involve single-unit recordings,extracellular field potentials, and amperometry; more recently,multielectrode arrayshave been used to record and mimic signals.
Neuromechanicsis the coupling of neurobiology, biomechanics, sensation and perception, and robotics.[12]Researchers are using advanced techniques and models to study the mechanical properties of neural tissues and their effects on the tissues' ability to withstand and generate force and movements as well as their vulnerability to traumatic loading.[13]This area of research focuses on translating the transformations of information among the neuromuscular and skeletal systems to develop functions and governing rules relating to operation and organization of these systems.[14]Neuromechanics can be simulated by connecting computational models of neural circuits to models of animal bodies situated in virtual physical worlds.[12]Experimental analysis of biomechanics including the kinematics and dynamics of movements, the process and patterns of motor and sensory feedback during movement processes, and the circuit and synaptic organization of the brain responsible for motor control are all currently being researched to understand the complexity of animal movement. Dr. Michelle LaPlaca's lab at Georgia Institute of Technology is involved in the study of mechanical stretch of cell cultures, shear deformation of planar cell cultures, and shear deformation of 3D cell containing matrices. Understanding of these processes is followed by development of functioning models capable of characterizing these systems under closed loop conditions with specially defined parameters. The study of neuromechanics is aimed at improving treatments for physiological health problems which includes optimization of prostheses design, restoration of movement post injury, and design and control of mobile robots. By studying structures in 3D hydrogels, researchers can identify new models of nerve cell mechanoproperties. For example, LaPlaca et al. developed a new model showing that strain may play a role in cell culture.[15]
Neuromodulationin medicine (known asneurotherapy) aims to treat disease or injury by employing medical device technologies that would enhance or suppress activity of the nervous system with the delivery of pharmaceutical agents, electrical signals, or other forms of energy stimulus to re-establish balance in impaired regions of the brain. Five neuromodulation domains constitute this subfield of neural engineering that uses engineering techniques to repair or enhance neural system activity: "light therapy", "photobiomodulation", a group of techniques within "transcranial electric current" and "transcranial magnetic field" stimulations, "acoustic photonic intellectual neurostimulation" (APIN), "low-frequency sound stimulations", including "vibroacoustic therapy" and "rhythmic auditory stimulation".[8]A review of scientific literature (2024) identifies hypotheses on etiology of different non-invasive neuromodulation techniques.[8]The analysis of these data and the mother-fetus neurocognitive model give insight into the origin of natural neuromodulation during pregnancy.[8]
Researchers in this field face the challenge of linking advances in understanding neural signals to advancements in technologies delivering and analyzing these signals with increased sensitivity, biocompatibility, and viability in closed loops schemes in the brain such that new treatments and clinical applications can be created to treat those with neural damage of various kinds.[16]Neuromodulator devices can correct nervous system dysfunction related to Parkinson's disease, dystonia, tremor, Tourette's, chronic pain, OCD, severe depression, and eventually epilepsy.[16]Neuromodulation is appealing as treatment for varying defects because it focuses in on treating highly specific regions of the brain only, contrasting that of systemic treatments that can have side effects on the body. Neuromodulator stimulators such as microelectrode arrays can stimulate and record brain function and with further improvements are meant to become adjustable and responsive delivery devices for drugs and other stimuli.[17]
Neural engineering and rehabilitation applies neuroscience and engineering to investigating peripheral and central nervous system function and to finding clinical solutions to problems created by brain damage or malfunction. Engineering applied toneuroregenerationfocuses on engineering devices and materials that facilitate the growth of neurons for specific applications such as the regeneration of peripheral nerve injury, the regeneration of the spinal cord tissue for spinal cord injury, and the regeneration of retinal tissue.Genetic engineeringandtissue engineeringare areas developing scaffolds for spinal cord to regrow across thus helping neurological problems.[16][18]
Research focused on neural engineering utilizes devices to study how the nervous system functions and malfunctions.[18]
Neuroimagingtechniques are used to investigate the activity of neural networks, as well as the structure and function of the brain. Neuroimaging technologies includefunctional magnetic resonance imaging(fMRI),magnetic resonance imaging(MRI),positron emission tomography(PET) andcomputed axial tomography(CAT) scans. Functional neuroimaging studies are interested in which areas of the brain perform specific tasks. fMRI measures hemodynamic activity that is closely linked to neural activity. It is used to map metabolic responses in specific regions of the brain to a given task or stimulus. PET, CT scanners, andelectroencephalography(EEG) are currently being improved and used for similar purposes.[16]
Scientists can use experimental observations of neuronal systems and theoretical and computational models of these systems to createNeural networkswith the hopes of modeling neural systems in as realistic a manner as possible. Neural networks can be used for analyses to help design further neurotechnological devices. Specifically, researchers use analytical or finite element modeling to determine nervous system control of movements and apply these techniques to help patients with brain injuries or disorders.Artificial neural networkscan be built from theoretical and computational models and implemented on computers from theoretical device equations or from experimental results of observed behavior of neuronal systems. Models might represent ion concentration dynamics, channel kinetics, synaptic transmission, single neuron computation, oxygen metabolism, or application of dynamic system theory.[15]Liquid-based template assembly was used to engineer 3D neural networks from neuron-seeded microcarrier beads.[19]
Neural interfacesare a major element used for studying neural systems and enhancing or replacing neuronal function with engineered devices. Engineers are challenged with developing electrodes that can selectively record from associated electronic circuits to collect information about the nervous system activity and to stimulate specified regions of neural tissue to restore function or sensation of that tissue (Cullen et al. 2011). The materials used for these devices must match the mechanical properties of neural tissue in which they are placed and the damage must be assessed. Neural interfacing involves temporary regeneration of biomaterial scaffolds or chronic electrodes and must manage the body'sresponse to foreign materials.[20]Microelectrode arrays are recent advances that can be used to study neural networks (Cullen & Pfister 2011). Optical neural interfaces involveoptical recordingsandoptogenetics, making certain brain cells sensitive to light in order to modulate their activity.Fiber opticscan be implanted in the brain to stimulate or silence targeted neurons using light, as well as record photon activity—aproxyof neural activity— instead of using electrodes.Two-photon excitation microscopycan study living neuronal networks and the communicatory events among neurons.[16]
Brain–computer interfacesseek to directly communicate with human nervous system to monitor and stimulate neural circuits as well as diagnose and treat intrinsic neurological dysfunction.Deep brain stimulationis a significant advance in this field that is especially effective in treating movement disorders such as Parkinson's disease with high frequency stimulation of neural tissue to suppress tremors (Lega et al. 2011).
Neural microsystems can be developed to interpret and deliver electrical, chemical, magnetic, and optical signals to neural tissue. They can detect variations in membrane potential and measure electrical properties such as spike population, amplitude, or rate by using electrodes, or by assessment of chemical concentrations, fluorescence light intensity, or magnetic field potential. The goal of these systems is to deliver signals that would influence neuronal tissue potential and thus stimulate the brain tissue to evoke a desired response.[9]
Microelectrode arraysare specific tools used to detect the sharp changes in voltage in the extracellular environments that occur from propagation of an action potential down an axon. Dr. Mark Allen and Dr. LaPlaca have microfabricated 3D electrodes out of cytocompatible materials such as SU-8 and SLA polymers which have led to the development of in vitro and in vivo microelectrode systems with the characteristics of high compliance and flexibility to minimize tissue disruption.
Neuroprostheticsare devices capable of supplementing or replacing missing functions of the nervous system by stimulating the nervous system and recording its activity. Electrodes that measure firing of nerves can integrate with prosthetic devices and signal them to perform the function intended by the transmitted signal. Sensory prostheses use artificial sensors to replace neural input that might be missing from biological sources.[9]Engineers researching these devices are charged with providing a chronic, safe, artificial interface with neuronal tissue. Perhaps the most successful of these sensory prostheses is thecochlear implantwhich has restored hearing abilities to the deaf.Visual prosthesisfor restoring visual capabilities of blind persons is still in more elementary stages of development. Motor prosthetics are devices involved with electrical stimulation of biological neural muscular system that can substitute for control mechanisms of the brain or spinal cord. Smart prostheses can be designed to replace missing limbs controlled by neural signals by transplanting nerves from the stump of an amputee to muscles. Sensory prosthetics provide sensory feedback by transforming mechanical stimuli from the periphery into encoded information accessible by the nervous system.[21]Electrodes placed on the skin can interpret signals and then control the prosthetic limb. These prosthetics have been very successful.Functional electrical stimulation(FES) is a system aimed at restoring motor processes such as standing, walking, and hand grasp.[16]
Neuroroboticsis the study of how neural systems can be embodied and movements emulated in mechanical machines. Neurorobots are typically used to studymotor controland locomotion, learning and memory selection, and value systems and action selection. By studying neurorobots in real-world environments, they are more easily observed and assessed to describe heuristics of robot function in terms of its embedded neural systems and the reactions of these systems to its environment.[22]For instance, using a computational model of epileptic spike-wave dynamics, the effectiveness of a method to simulate seizure abatement through a pseudospectral protocol has already been demonstrated. The computational model emulates the brain connectivity by using magnetic resonance imaging from a patient with idiopathic generalized epilepsy. The method was able to generate stimuli able to lessen the seizures.
Neural tissue regeneration, orneuroregeneration, looks to restore function to neurons that have been damaged, whether in small injuries or in larger injuries like those caused by traumatic brain injury. Functional restoration of damaged nerves involves re-establishment of a continuous pathway for regenerating axons to the site of innervation. Researchers like Dr. LaPlaca at Georgia Institute of Technology are looking to help find treatment for repair and regeneration aftertraumatic brain injuryandspinal cord injuriesby applying tissue engineering strategies. Dr. LaPlaca is looking into methods combining neural stem cells with an extracellular matrix protein-based scaffold for minimally invasive delivery into the irregularly shaped lesions that form after a traumatic insult. By studying neural stem cells in vitro and exploring alternative cell sources, engineering novel biopolymers that could be utilized in a scaffold, and investigating cell or tissue engineered construct transplants in vivo in models of traumatic brain and spinal cord injury, Dr. LaPlaca's lab aims to identify optimal strategies for nerve regeneration post injury.
End-to-end surgical suture of damaged nerve ends can repair small gaps with autologous nerve grafts. For larger injuries, an autologous nerve graft that has been harvested from another site in the body might be used, though this process is time-consuming, costly and requires two surgeries.[18]Clinical treatment for the CNS is minimally available and focuses mostly on reducing collateral damage caused by bone fragments near the site of injury or inflammation. After swelling surrounding the injury lessens, patients undergo rehabilitation so that remaining nerves can be trained to compensate for the lack of nerve function in injured nerves. No treatment currently exists to restore nerve function of CNS nerves that have been damaged.
Engineering strategies for the repair of spinal cord injury are focused on creating a friendly environment for nerve regeneration. Only repair ofperipheral (PNS) nervedamage has been clinically possible so far, but advances in research on genetic techniques and biomaterials demonstrate the potential for spinal cord nerves to regenerate in permissible environments.
Advantages ofautologoustissue graftsare that they come from natural materials which have a high likelihood of biocompatibility while providing structural support to nerves that encourage cell adhesion and migration.[18]Nonautologous tissue, acellular grafts, and extracellular matrix-based materials are all options that may also provide ideal scaffolding fornerve regeneration. Some come fromallogenicorxenogenictissues that must be combined withimmunosuppressants, while others include small intestinalsubmucosaand amniotic tissue grafts. Synthetic materials are attractive options because their physical and chemical properties can typically be controlled. A challenge that remains with synthetic materials isbiocompatibility.Methylcellulose-based constructs have been shown to be a biocompatible option serving this purpose.[23]AxoGenuses a cell graft technology AVANCE to mimic a human nerve. It has been shown to achieve meaningful recovery in 87 percent of patients with peripheral nerve injuries.[24]
Nerve guidance channels, ornerve guidance conduits, are innovative strategies focusing on larger defects that provide a conduit for sprouting axons, directing growth and reducing growth inhibition from scar tissue. Nerve guidance channels must be readily formed into a conduit with the desired dimensions, sterilizable, tear resistant, and easy to handle and suture.[18]Ideally they would degrade over time with nerve regeneration, be pliable, semipermeable, maintain their shape, and have a smooth inner wall that mimics that of a real nerve.
Highly controlled delivery systems are needed to promoteneural regeneration.Neurotrophic factorscan influence development, survival, outgrowth, and branching. Neurotrophins includenerve growth factor(NGF),brain derived neurotrophic factor(BDNF),neurotrophin-3(NT-3) andneurotrophin-4/5(NT-4/5). Other factors areciliary neurotrophic factor(CNTF),glial cell line-derived growth factor(GDNF) andacidic and basic fibroblast growth factor(aFGF, bFGF) that promote a range of neural responses.[18]Fibronectinhas also been shown to support nerve regeneration following TBI in rats.[25]Other therapies are looking into regeneration of nerves by upregulatingregeneration associated genes(RAGs), neuronal cytoskeletal components, andantiapoptosis factors. RAGs include GAP-43 and Cap-23,adhesion moleculessuch asL1 family,NCAM, andN-cadherin.
There is also the potential for blocking inhibitory biomolecules in the CNS due to glial scarring. Some currently being studied are treatments withchondroitinaseABC and blocking NgR, ADP-ribose.[18]
Delivery devices must be biocompatible and stable in vivo. Some examples include osmotic pumps, silicone reservoirs, polymer matrices, and microspheres. Gene therapy techniques have also been studied to provide long-term production of growth factors and could be delivered with viral or non-viral vectors such as lipoplexes. Cells are also effective delivery vehicles for ECM components, neurotrophic factors and cell adhesion molecules. Olfactory ensheathing cells (OECs) and stem cells as well as genetically modified cells have been used as transplants to support nerve regeneration.[15][18][25]
Advanced therapies combine complex guidance channels and multiple stimuli that focus on internal structures that mimic the nerve architecture containing internal matrices of longitudinally aligned fibers or channels. Fabrication of these structures can use a number of technologies: magnetic polymer fiber alignment, injection molding, phase separation, solid free-form fabrication, and ink jet polymer printing.[18]
Augmentation of human neural systems, orhuman enhancementusing engineering techniques is another possible application of neuroengineering. Deep brain stimulation has already been shown to enhance memory recall as noted by patients currently using this treatment for neurological disorders. Brain stimulation techniques are postulated to be able to sculpt emotions and personalities as well as enhance motivation, reduce inhibitions, etc. as requested by the individual. Ethical issues with this sort of human augmentation are a new set of questions that neural engineers have to grapple with as these studies develop.[16]
|
https://en.wikipedia.org/wiki/Neural_engineering
|
Pulse-coupled networksorpulse-coupled neural networks(PCNNs) are neural models proposed by modeling a cat'svisual cortex, and developed for high-performancebiomimeticimage processing.[1]
In 1989, Eckhorn introduced a neural model to emulate the mechanism of the cat's visual cortex.[2]The Eckhorn model provided a simple and effective tool for studying the visual cortex of small mammals, and was soon recognized as having significant application potential in image processing.
In 1994, Johnson adapted the Eckhorn model to an image processingalgorithm, calling this algorithm apulse-coupled neural network.
The basic property of the Eckhorn's linking-field model (LFM) is the coupling term. LFM is a modulation of the primary input by a biased offset factor driven by the linking input. These drive a threshold variable that decays from an initial high value. When the threshold drops below zero it is reset to a high value and the process starts over. This is different than the standard integrate-and-fire neural model, which accumulates the input until it passes an upper limit and effectively "shorts out" to cause the pulse.
LFM uses this difference to sustain pulse bursts, something the standard model does not do on a single neuron level. It is valuable to understand, however, that a detailed analysis of the standard model must include a shunting term, due to the floating voltage levels in thedendriticcompartment(s), and in turn this causes an elegant multiple modulation effect that enables a truehigher-order network(HON).[3][4][5]
A PCNN is a two-dimensionalneural network. Eachneuronin the network corresponds to one pixel in an input image, receiving its corresponding pixel's color information (e.g. intensity) as an external stimulus. Each neuron also connects with its neighboring neurons, receiving local stimuli from them. The external and local stimuli are combined in an internal activation system, which accumulates the stimuli until it exceeds a dynamic threshold, resulting in a pulse output. Through iterative computation, PCNN neurons produce temporal series of pulse outputs. The temporal series of pulse outputs contain information of input images and can be used for various image processing applications, such as image segmentation and feature generation. Compared with conventional image processing means, PCNNs have several significant merits, including robustness against noise, independence of geometric variations in input patterns, capability of bridging minor intensity variations in input patterns, etc.
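As a rough illustration of the mechanism described above, the following is a minimal sketch of a simplified PCNN iteration in Python, assuming NumPy is available; the feeding/linking/threshold coefficients are illustrative choices, not values taken from the literature.

```python
import numpy as np

def neighbour_sum(Y):
    """Sum of pulses from the four nearest neighbours (periodic boundary)."""
    return (np.roll(Y, 1, axis=0) + np.roll(Y, -1, axis=0) +
            np.roll(Y, 1, axis=1) + np.roll(Y, -1, axis=1))

def pcnn(image, steps=10, alpha_f=0.1, alpha_l=1.0, alpha_e=1.0,
         v_f=0.5, v_l=0.2, v_e=20.0, beta=0.1):
    """Simplified pulse-coupled neural network; one neuron per pixel."""
    F = np.zeros_like(image, dtype=float)   # feeding input (external stimulus)
    L = np.zeros_like(F)                    # linking input (local stimulus)
    E = np.ones_like(F)                     # dynamic threshold
    Y = np.zeros_like(F)                    # pulse output
    outputs = []
    for _ in range(steps):
        W = neighbour_sum(Y)
        F = np.exp(-alpha_f) * F + v_f * W + image  # external + neighbour drive
        L = np.exp(-alpha_l) * L + v_l * W          # purely local (linking) drive
        U = F * (1.0 + beta * L)                    # internal activation
        Y = (U > E).astype(float)                   # fire where the threshold is exceeded
        E = np.exp(-alpha_e) * E + v_e * Y          # threshold decays, recharges on firing
        outputs.append(Y.copy())
    return outputs

# Example: pulse maps for a random 64x64 "image"
pulses = pcnn(np.random.rand(64, 64))
```

Thresholding the internal activation against the decaying threshold, and recharging the threshold whenever a neuron fires, is what produces the temporal pulse trains used for segmentation and feature generation.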
A simplified PCNN called a spikingcortical modelwas developed in 2009.[6]
PCNNs are useful forimage processing, as discussed in a book by Thomas Lindblad and Jason M. Kinser.[7]
PCNNs have been used in a variety of image processing applications, including:image segmentation,pattern recognition,feature generation,face extraction,motion detection,region growing,image denoising[8]andimage enhancement.[9]
Multidimensional pulse image processing of chemical structure data using PCNN has been discussed by Kinser, et al.[10]
They have also been applied to an all pairs shortest path problem.[11]
|
https://en.wikipedia.org/wiki/Pulse-coupled_networks
|
Anerve tractis a bundle of nerve fibers (axons) connectingnucleiof thecentral nervous system.[1][2][3]In theperipheral nervous system, this is known as anerve fascicle, and has associatedconnective tissue. The main nerve tracts in the central nervous system are of three types:association fibers,commissural fibers, andprojection fibers. A nerve tract may also be referred to as acommissure,decussation, orneural pathway.[4]A commissure connects the twocerebral hemispheresat the same levels, while a decussation connects at different levels (crosses obliquely).
The nerve fibers in the central nervous system can be categorized into three groups on the basis of their course and connections.[5]Different tracts may also be referred to asprojectionsorradiationssuch asthalamocortical radiations.
The tracts that connect cortical areas within the same hemisphere are calledassociation tracts.[5]Long association fibers connect different lobes of a hemisphere to each other whereas short association fibers connect different gyri within a single lobe. Among their roles, association tracts link perceptual and memory centers of the brain.[6]
Thecingulumis a major association tract. The cingulum forms the white matter core of thecingulate gyrusand links from this to theentorhinal cortex. Another major association tract is thesuperior longitudinal fasciculus(SLF) that has three parts.
Commissural tractsconnect corresponding cortical areas in the two hemispheres.[5]They cross from one cerebral hemisphere to the other through bridges calledcommissures. The great majority of commissural tracts pass through the largest commissure thecorpus callosum. A few tracts pass through the much smalleranteriorandposterior commissures. Commissural tracts enable the left and right sides of the cerebrum to communicate with each other. Other commissures are thehippocampal commissure, and thehabenular commissure.
Projection tractsconnect the cerebral cortex with thecorpus striatum,diencephalon,brainstemand thespinal cord.[5]Thecorticospinal tractfor example, carries motor signals from the cerebrum to the spinal cord. Other projection tracts carry signals upward to the cerebral cortex. Superior to the brainstem, such tracts form a broad, dense sheet called the internal capsule between the thalamus and basal nuclei, then radiate in a diverging, fanlike array to specific areas of the cortex.
|
https://en.wikipedia.org/wiki/Nerve_tract
|
Inneuroanatomy, aneural pathwayis the connection formed byaxonsthat project fromneuronsto makesynapsesonto neurons in another location, to enableneurotransmission(the sending of a signal from one region of thenervous systemto another). Neurons are connected by a single axon, or by a bundle of axons known as anerve tract, orfasciculus.[1]Shorter neural pathways are found withingrey matterin thebrain, whereas longer projections, made up ofmyelinatedaxons, constitutewhite matter.
In thehippocampus, there are neural pathways involved in its circuitry including theperforant pathway, that provides a connectional route from theentorhinal cortex[2]to all fields of thehippocampal formation, including thedentate gyrus, allCA fields(including CA1),[3]and thesubiculum.
Descending motor pathways of thepyramidal tractstravel from thecerebral cortexto thebrainstemor lowerspinal cord.[4][5]Ascendingsensorytracts in thedorsal column–medial lemniscus pathway(DCML) carry information from the periphery to the cortex of the brain.
The first named pathways are evident to the naked eye even in a poorly preservedbrain, and were named by the great anatomists of theRenaissanceusing cadaver material.[citation needed]Examples of these include the greatcommissuresof the brain such as thecorpus callosum(Latin, "hard body"; not to be confused with the Latin word "colossus" – the "huge" statue),anterior commissure, andposterior commissure.[citation needed]Further examples include thepyramidal tract,crus cerebri(Latin, "leg of the brain"), andcerebellar peduncles(Latin, "little foot of thecerebellum").[citation needed]Note that these names describe theappearanceof a structure but give no information on its function, location, etc.[citation needed]
Later, asneuroanatomicalknowledge became more sophisticated, the trend was toward naming pathways by their origin and termination.[citation needed]For example, thenigrostriatal pathwayruns from thesubstantia nigra(Latin, "black substance") to thecorpus striatum(Latin, "striped body").[citation needed]This naming can extend to include any number of structures in a pathway, such that the cerebellorubrothalamocortical pathway originates in thecerebellum,synapsesin thered nucleus("ruber" in Latin), on to thethalamus, and finally terminating in thecerebral cortex.[citation needed]
Sometimes, these two naming conventions coexist. For example, the name "pyramidal tract" has been mainly supplanted bylateral corticospinal tractin most texts.[citation needed]Note that the "old" name was primarily descriptive, evoking thepyramidsof antiquity, from the appearance of this neural pathway in themedulla oblongata.[citation needed]The "new" name is based primarily on its origin (in the primary motorcortex,Brodmann area4) and termination (onto thealpha motor neuronsof thespinal cord).[citation needed]
In thecerebellum, one of the two major pathways is that of themossy fibers. Mossy fibers project directly to thedeep nuclei, but also give rise to the following pathway: mossy fibers → granule cells → parallel fibers → Purkinje cells → deep nuclei. The other main pathway is from theclimbing fibersand these project to Purkinje cells and also send collaterals directly to the deep nuclei.[6]
In general,neuronsreceive information either at theirdendritesorcell bodies. Theaxonof a nerve cell is, in general, responsible for transmitting information over a relatively long distance. Therefore, most neural pathways are made up ofaxons.[citation needed]If theaxonshavemyelinsheaths, then the pathway appears bright white becausemyelinis primarilylipid.[citation needed]If most or all of the axons lackmyelinsheaths (i.e., areunmyelinated), then the pathway will appear a darker beige color, which is generally calledgrey.[citation needed]
Some neurons are responsible for conveying information over long distances. For example,motor neurons, which travel from the spinal cord to the muscle, can have axons up to a meter in length in humans. The longest axon in the human body belongs to thesciatic nerveand runs from the greattoeto the base of the spinal cord. These are archetypal examples of neural pathways.[citation needed]
Neural pathways in thebasal gangliain thecortico-basal ganglia-thalamo-cortical loop, are seen as controlling different aspects of behaviour. This regulation is enabled by thedopamine pathways. It has been proposed that the dopamine system of pathways is the overall organiser of the neural pathways that are seen to be parallels of the dopamine pathways.[7]Dopamine is provided bothtonicallyand phasically in response to the needs of the neural pathways.[7]
|
https://en.wikipedia.org/wiki/Neural_pathway
|
Anerve plexusis aplexus(branching network) of intersectingnerves.[1]A nerve plexus is composed of afferent and efferent fibers that arise from the merging of the anterior rami of spinal nerves and blood vessels. There are fivespinal nerveplexuses (the spinal nerves of the thoracic region, by exception, do not form one), as well as other forms ofautonomicplexuses, many of which are a part of theenteric nervous system. The nerves that arise from the plexuses have both sensory and motor functions. These functions include muscle contraction, the maintenance of body coordination and control, and the reaction to sensations such as heat, cold, pain, and pressure. There are several plexuses in the body, including:
The following list shows the spinal level from which each spinal plexus arises:
Cervical plexus: C1 – C5
Brachial plexus: C5 – T1
Lumbar plexus: L1 – L4
Sacral plexus: L4, L5, S1 – S4
Coccygeal plexus: S4, S5, Co
Thecervical plexusis formed by the ventral rami of the upper four cervical nerves and the upper part of fifth cervical ventral ramus. The network of rami is located deep to the sternocleidomastoid within the neck. The cervical plexus innervates muscles of the neck and areas of skin on the head, neck and chest. The deep branches innervate muscles, while the superficial branches supply areas of skin. A long branch (primarily of fibers of C4 and with contributions of fibers from C3 and C5;nervus phrenicus) innervates muscles of thediaphragm. The cervical plexus also communicates with thecranial nervesvagus nerveandhypoglossal nerve.
Thebrachial plexusis formed by the ventral rami of C5-C8-T1 spinal nerves, and lower and upper halves of C4 and T2 spinal nerves. The plexus extends toward the armpit. The ventral rami of C5 and C6 form upper trunk, the ventral ramus of C7 forms the middle trunk, and the ventral rami of C8 and T1 join to form the lower trunk of the brachial plexus. Under the clavicle, the trunks reorganize to form cords(fasciculi)around theaxillary artery(arteria axillaris). The lateral cord(fasciculus lateralis)is formed by the upper and middle trunk, all three trunks join to form the posterior cord(fasciculus posterior), the lower trunk continues to the medial trunk(fasciculus medialis). Thenerves(containingmotorandsensoryfibers) to theshoulderand to theupper limbemerge from the brachial plexus.
Since thelumbar plexusandsacral plexusare interconnected, they are sometimes referred to as thelumbosacral plexus. Theintercostal nerves, which give the chest and the upper parts of the abdominal wallefferentmotorinnervation and thepleuraandperitoneumafferentsensoryinnervation, are the only ones that do not originate from a plexus.
The ventral rami of L1–L5 spinal nerves with a contribution of T12 form the lumbar plexus. This plexus lies within thepsoas major muscle. Nerves of the plexus serve the skin and the muscles of the lowerabdominal wall, thethighandexternal genitals. The largest nerve of the plexus is thefemoral nerve. It supplies anterior muscles of the thigh and a part of the skin distal to theinguinal ligament.
Ventral rami of L4–S3 with parts of the L4 and S4 spinal nerves form thesacral plexus. It is located on the posterior wall of thepelvic cavity(pelvis minor). Nerves of the plexus innervate theperineal region,buttocksand thelower limb. The largest nerve of the human body, thesciatic nerve, is the main branch that gives rami to themotorinnervation of themusclesof thethigh, theleg, and thefoot. The common peroneal nerve and its branches innervate some parts of the skin of the foot, the peroneal muscles of the leg, and the dorsal muscles of the foot.
Thecoccygeal plexusoriginates from the ventral rami of spinal nerves S4, S5, and Co. It is interconnected with the lower part of thesacral plexus. The only nerve of the plexus is thecoccygeal nerve, which provides sensory innervation of the skin in thecoccygeal region.
Autonomic plexuses can contain both sympathetic and parasympathetic neurons.
Thecardiac plexusis located near the aortic arch and the carina of the trachea.
Thepulmonary plexussupplies innervation to the bronchial tree.
The celiac, orsolar plexus, is located around the celiac trunk and contains the celiac ganglia. The solar plexus is the largest autonomic plexus and provides innervation to multiple abdominal and pelvic organs.
Thesuperior mesenteric plexusincludes the superior mesenteric ganglia and is located around the superior mesenteric artery. Theinferior mesenteric plexusincludes the inferior mesenteric ganglia and is located around the inferior mesenteric artery. Together, these plexuses innervate the intestines.
Some other plexuses include thesuperiorandinferior hypogastric plexus,renal plexus,hepatic plexus,splenic plexus,gastric plexus,pancreatic plexus, andtesticular plexus/ovarian plexus.
|
https://en.wikipedia.org/wiki/Nerve_plexus
|
Dual coneandpolar coneare closely related concepts inconvex analysis, a branch ofmathematics.
Thedual coneC*of asubsetCin alinear spaceXover thereals, e.g.Euclidean spaceRn, withdual spaceX*is the setC∗={y∈X∗:⟨y,x⟩≥0 for all x∈C},{\displaystyle C^{*}=\left\{y\in X^{*}:\langle y,x\rangle \geq 0{\text{ for all }}x\in C\right\},}
where⟨y,x⟩{\displaystyle \langle y,x\rangle }is theduality pairingbetweenXandX*, i.e.⟨y,x⟩=y(x){\displaystyle \langle y,x\rangle =y(x)}.
C*is always aconvex cone, even ifCis neitherconvexnor acone.
IfXis atopological vector spaceover the real or complex numbers, then thedual coneof a subsetC⊆Xis the following set of continuous linear functionals onX:
which is thepolarof the set -C.[1]No matter whatCis,C′{\displaystyle C^{\prime }}will be a convex cone.
IfC⊆ {0} thenC′=X′{\displaystyle C^{\prime }=X^{\prime }}.
Alternatively, many authors define the dual cone in the context of a realHilbert space(such asRnequipped with the Euclidean inner product) to be what is sometimes called theinternal dual cone.
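When the cone is generated by finitely many vectors, membership in the internal dual cone amounts to checking a nonnegative inner product with every generator. A minimal sketch in Python (NumPy assumed; the generator matrix G and the test vectors are illustrative):

```python
import numpy as np

def in_internal_dual_cone(y, G, tol=1e-9):
    """True if <y, g> >= 0 for every generator g (a column of G),
    i.e. y lies in the internal dual cone of cone(G)."""
    return bool(np.all(G.T @ y >= -tol))

# The nonnegative orthant in R^2 is generated by the standard basis vectors.
G = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(in_internal_dual_cone(np.array([2.0, 3.0]), G))   # True: the orthant is self-dual
print(in_internal_dual_cone(np.array([1.0, -1.0]), G))  # False
```

Because the nonnegative orthant is self-dual, the first test vector lies in the dual cone while the second does not.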
Using this latter definition forC*, we have that whenCis a cone, the following properties hold:[2]
A coneCin a vector spaceXis said to beself-dualifXcan be equipped with aninner product⟨⋅,⋅⟩ such that the internal dual cone relative to this inner product is equal toC.[3]Those authors who define the dual cone as the internal dual cone in a real Hilbert space usually say that a cone is self-dual if it is equal to its internal dual.
This is slightly different from the above definition, which permits a change of inner product.
For instance, the above definition makes a cone inRnwith ellipsoidal base self-dual, because the inner product can be changed to make the base spherical, and a cone with spherical base inRnis equal to its internal dual.
The nonnegativeorthantofRnand the space of allpositive semidefinite matricesare self-dual, as are the cones with ellipsoidal base (often called "spherical cones", "Lorentz cones", or sometimes "ice-cream cones").
So are all cones inR3whose base is the convex hull of a regular polygon with an odd number of vertices.
A less regular example is the cone inR3whose base is the "house": the convex hull of a square and a point outside the square forming an equilateral triangle (of the appropriate height) with one of the sides of the square.
For a setCinX, thepolar coneofCis the set[4]Co={y∈X∗:⟨y,x⟩≤0 for all x∈C}.{\displaystyle C^{o}=\left\{y\in X^{*}:\langle y,x\rangle \leq 0{\text{ for all }}x\in C\right\}.}
It can be seen that the polar cone is equal to the negative of the dual cone, i.e.Co= −C*.
For a closed convex coneCinX, the polar cone is equivalent to thepolar setforC.[5]
|
https://en.wikipedia.org/wiki/Dual_cone
|
Inmathematics,Farkas' lemmais a solvability theorem for a finitesystemoflinear inequalities. It was originally proven by the Hungarian mathematicianGyula Farkas.[1]Farkas'lemmais the key result underpinning thelinear programmingduality and has played a central role in the development ofmathematical optimization(alternatively,mathematical programming). It is used amongst other things in the proof of theKarush–Kuhn–Tucker theoreminnonlinear programming.[2]Remarkably, in the area of the foundations of quantum theory, the lemma also underlies the complete set ofBell inequalitiesin the form of necessary and sufficient conditions for the existence of alocal hidden-variable theory, given data from any specific set of measurements.[3]
Generalizations of the Farkas' lemma are about the solvability theorem for convex inequalities,[4]i.e., infinite system of linear inequalities. Farkas' lemma belongs to a class of statements called "theorems of the alternative": a theorem stating that exactly one of two systems has a solution.[5]
There are a number of slightly different (but equivalent) formulations of the lemma in the literature. The one given here is due to Gale, Kuhn and Tucker (1951).[6]
Farkas' lemma—LetA∈Rm×n{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n}}andb∈Rm.{\displaystyle \mathbf {b} \in \mathbb {R} ^{m}.}Then exactly one of the following two assertions is true: (1) there exists anx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}such thatAx=b{\displaystyle \mathbf {Ax} =\mathbf {b} }andx≥0{\displaystyle \mathbf {x} \geq 0}; (2) there exists ay∈Rm{\displaystyle \mathbf {y} \in \mathbb {R} ^{m}}such thatA⊤y≥0{\displaystyle \mathbf {A} ^{\top }\mathbf {y} \geq 0}andb⊤y<0.{\displaystyle \mathbf {b} ^{\top }\mathbf {y} <0.}
Here, the notationx≥0{\displaystyle \mathbf {x} \geq 0}means that all components of the vectorx{\displaystyle \mathbf {x} }are nonnegative.
Letm,n= 2,A=[6430],{\displaystyle \mathbf {A} ={\begin{bmatrix}6&4\\3&0\end{bmatrix}},}andb=[b1b2].{\displaystyle \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\end{bmatrix}}.}The lemma says that exactly one of the following two statements must be true (depending onb1andb2):
Here is a proof of the lemma in this special case:
Consider theclosedconvex coneC(A){\displaystyle C(\mathbf {A} )}spanned by the columns ofA; that is,C(A)={Ax∣x≥0}.{\displaystyle C(\mathbf {A} )=\{\mathbf {Ax} \mid \mathbf {x} \geq 0\}.}
Observe thatC(A){\displaystyle C(\mathbf {A} )}is the set of the vectorsbfor which the first assertion in the statement of Farkas' lemma holds. On the other hand, the vectoryin the second assertion is orthogonal to ahyperplanethat separatesbandC(A).{\displaystyle C(\mathbf {A} ).}The lemma follows from the observation thatbbelongs toC(A){\displaystyle C(\mathbf {A} )}if and only ifthere is no hyperplane that separates it fromC(A).{\displaystyle C(\mathbf {A} ).}
More precisely, leta1,…,an∈Rm{\displaystyle \mathbf {a} _{1},\dots ,\mathbf {a} _{n}\in \mathbb {R} ^{m}}denote the columns ofA. In terms of these vectors, Farkas' lemma states that exactly one of the following two statements is true: (1) there exist nonnegative coefficientsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}such thatb=x1a1+⋯+xnan{\displaystyle \mathbf {b} =x_{1}\mathbf {a} _{1}+\dots +x_{n}\mathbf {a} _{n}}; (2) there exists a vectorysuch thaty⊤ai≥0{\displaystyle \mathbf {y} ^{\top }\mathbf {a} _{i}\geq 0}for alliandy⊤b<0.{\displaystyle \mathbf {y} ^{\top }\mathbf {b} <0.}
The sumsx1a1+⋯+xnan{\displaystyle x_{1}\mathbf {a} _{1}+\dots +x_{n}\mathbf {a} _{n}}with nonnegative coefficientsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}form the cone spanned by the columns ofA. Therefore, the first statement tells thatbbelongs toC(A).{\displaystyle C(\mathbf {A} ).}
The second statement tells that there exists a vectorysuch that the angle ofywith the vectorsaiis at most 90°, while the angle ofywith the vectorbis more than 90°. The hyperplane normal to this vector has the vectorsaion one side and the vectorbon the other side. Hence, this hyperplane separates the cone spanned bya1,…,an{\displaystyle \mathbf {a} _{1},\dots ,\mathbf {a} _{n}}from the vectorb.
For example, letn,m= 2,a1= (1, 0)T, anda2= (1, 1)T. The convex cone spanned bya1anda2can be seen as a wedge-shaped slice of the first quadrant in thexyplane. Now, supposeb= (0, 1). Certainly,bis not in the convex conea1x1+a2x2. Hence, there must be a separating hyperplane. Lety= (1, −1)T. We can see thata1·y= 1,a2·y= 0, andb·y= −1. Hence, the hyperplane with normalyindeed separates the convex conea1x1+a2x2fromb.
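For concreteness, the two alternatives in this example can also be checked numerically; the sketch below uses scipy.optimize.linprog as a feasibility oracle for the first system (SciPy and NumPy are assumed to be available, and the zero objective serves purely as a feasibility test).

```python
import numpy as np
from scipy.optimize import linprog

# Columns of A are a1 = (1, 0) and a2 = (1, 1); b = (0, 1), as above.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
b = np.array([0.0, 1.0])

# First alternative: is A x = b solvable with x >= 0?
res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b,
              bounds=[(0, None)] * A.shape[1])
print("A x = b, x >= 0 feasible:", res.success)   # expected: False

# Second alternative: y = (1, -1) satisfies A^T y >= 0 and b^T y < 0.
y = np.array([1.0, -1.0])
print("A^T y =", A.T @ y, "  b^T y =", b @ y)     # [1. 0.] and -1.0
```

The solver reports the first system as infeasible, and the explicit vector y = (1, −1) certifies the second alternative, matching the geometric argument above.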
A particularly suggestive and easy-to-remember version is the following: if a set of linear inequalities has no solution, then a contradiction can be produced from it by linear combination with nonnegative coefficients. In formulas: ifAx≤b{\displaystyle \mathbf {Ax} \leq \mathbf {b} }is unsolvable theny⊤A=0,{\displaystyle \mathbf {y} ^{\top }\mathbf {A} =0,}y⊤b=−1,{\displaystyle \mathbf {y} ^{\top }\mathbf {b} =-1,}y≥0{\displaystyle \mathbf {y} \geq 0}has a solution.[7]Note thaty⊤A{\displaystyle \mathbf {y} ^{\top }\mathbf {A} }is a combination of the left-hand sides,y⊤b{\displaystyle \mathbf {y} ^{\top }\mathbf {b} }a combination of the right-hand side of the inequalities. Since the positive combination produces a zero vector on the left and a −1 on the right, the contradiction is apparent.
Thus, Farkas' lemma can be viewed as a theorem oflogical completeness:Ax≤b{\displaystyle \mathbf {Ax} \leq \mathbf {b} }is a set of "axioms", the linear combinations are the "derivation rules", and the lemma says that, if the set of axioms is inconsistent, then it can be refuted using the derivation rules.[8]: 92–94
Farkas' lemma implies that thedecision problem"Given asystem of linear equations, does it have a non-negative solution?" is in the intersection ofNPandco-NP. This is because, according to the lemma, both a "yes" answer and a "no" answer have a proof that can be verified in polynomial time. The problems in the intersectionNP∩coNP{\displaystyle NP\cap coNP}are also calledwell-characterized problems. It is a long-standing open question whetherNP∩coNP{\displaystyle NP\cap coNP}is equal toP. In particular, the question of whether a system of linear equations has a non-negative solution was not known to be in P, until it was proved using theellipsoid method.[9]: 25
The Farkas Lemma has several variants with different sign constraints (the first one is the original version):[8]: 92
The latter variant is mentioned for completeness; it is not actually a "Farkas lemma" since it contains only equalities. Its proof is anexercise in linear algebra.
There are also Farkas-like lemmas forintegerprograms.[9]: 12--14For systems of equations, the lemma is simple:
For systems of inequalities, the lemma is much more complicated. It is based on the following tworules of inference:
The lemma says that:
The variants are summarized in the table below.
Generalized Farkas' lemma—LetA∈Rm×n,{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n},}b∈Rm,{\displaystyle \mathbf {b} \in \mathbb {R} ^{m},}S{\displaystyle \mathbf {S} }is a closed convex cone inRn,{\displaystyle \mathbb {R} ^{n},}and thedual coneofS{\displaystyle \mathbf {S} }isS∗={z∈Rn∣z⊤x≥0,∀x∈S}.{\displaystyle \mathbf {S} ^{*}=\{\mathbf {z} \in \mathbb {R} ^{n}\mid \mathbf {z} ^{\top }\mathbf {x} \geq 0,\forall \mathbf {x} \in \mathbf {S} \}.}If convex coneC(A)={Ax∣x∈S}{\displaystyle C(\mathbf {A} )=\{\mathbf {A} \mathbf {x} \mid \mathbf {x} \in \mathbf {S} \}}is closed, then exactly one of the following two statements is true:
Generalized Farkas' lemma can be interpreted geometrically as follows: either a vector is in a given closedconvex cone, or there exists ahyperplaneseparating the vector from the cone; there are no other possibilities. The closedness condition is necessary, seeSeparation theorem IinHyperplane separation theorem. For original Farkas' lemma,S{\displaystyle \mathbf {S} }is the nonnegative orthantR+n,{\displaystyle \mathbb {R} _{+}^{n},}hence the closedness condition holds automatically. Indeed, for polyhedral convex cone, i.e., there exists aB∈Rn×k{\displaystyle \mathbf {B} \in \mathbb {R} ^{n\times k}}such thatS={Bx∣x∈R+k},{\displaystyle \mathbf {S} =\{\mathbf {B} \mathbf {x} \mid \mathbf {x} \in \mathbb {R} _{+}^{k}\},}the closedness condition holds automatically. Inconvex optimization, various kinds of constraint qualification, e.g.Slater's condition, are responsible for closedness of the underlying convex coneC(A).{\displaystyle C(\mathbf {A} ).}
By settingS=Rn{\displaystyle \mathbf {S} =\mathbb {R} ^{n}}andS∗={0}{\displaystyle \mathbf {S} ^{*}=\{0\}}in generalized Farkas' lemma, we obtain the following corollary about the solvability for a finite system of linear equalities:
Corollary—LetA∈Rm×n{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n}}andb∈Rm.{\displaystyle \mathbf {b} \in \mathbb {R} ^{m}.}Then exactly one of the following two statements is true:
Farkas' lemma can be varied to many further theorems of alternative by simple modifications,[5]such asGordan's theorem: EitherAx<0{\displaystyle \mathbf {Ax} <0}has a solutionx, orA⊤y=0{\displaystyle \mathbf {A} ^{\top }\mathbf {y} =0}has a nonzero solutionywithy≥ 0.
Common applications of Farkas' lemma include proving thestrong duality theorem associated with linear programmingand theKarush–Kuhn–Tucker conditions. An extension of Farkas' lemma can be used to analyze the strong duality conditions for a semidefinite program and to construct its dual. It is sufficient to prove the existence of the Karush–Kuhn–Tucker conditions using theFredholm alternative, but for the condition to be necessary, one must apply von Neumann'sminimax theoremto show the equations derived by Cauchy are not violated.
This is used forDill'sReluplex method for verifying deep neural networks.
|
https://en.wikipedia.org/wiki/Farkas%27s_lemma
|
Inmathematicsandstatistics,deviationserves as a measure to quantify the disparity between anobserved valueof a variable and another designated value, frequently the mean of that variable. Deviations with respect to thesample meanand thepopulation mean(or "true value") are callederrorsandresiduals, respectively. Thesignof the deviation reports the direction of that difference: the deviation is positive when the observed value exceeds the reference value. Theabsolute valueof the deviation indicates the size or magnitude of the difference. In a givensample, there are as many deviations assample points.Summary statisticscan be derived from a set of deviations, such as thestandard deviationand themean absolute deviation, measures ofdispersion, and themean signed deviation, a measure ofbias.[1]
The deviation of each data point is calculated by subtracting the mean of the data set from the individual data point. Mathematically, the deviationdof a data pointxin a data set with respect to the meanmis given by the difference:
This calculation represents the "distance" of a data point from the mean and provides information about how much individual values vary from the average. Positive deviations indicate values above the mean, while negative deviations indicate values below the mean.[1]
The sum of squared deviations is a key component in the calculation ofvariance, another measure of the spread or dispersion of a data set. Variance is calculated by averaging the squared deviations. Deviation is a fundamental concept in understanding the distribution and variability of data points in statistical analysis.[1]
A deviation that is a difference between an observed value and thetrue valueof a quantity of interest (wheretrue valuedenotes the Expected Value, such as the population mean) is an error.[2]
A deviation that is the difference between the observed value and an estimate of the true value (e.g. the sample mean) is aresidual. These concepts are applicable for data at theintervalandratiolevels of measurement.[3]
Di=|xi−m(X)|,{\displaystyle D_{i}=|x_{i}-m(X)|,}whereDi{\displaystyle D_{i}}is the absolute deviation,xi{\displaystyle x_{i}}is a data element, andm(X){\displaystyle m(X)}is the chosen measure of central tendency of the data set (typically the mean or the median).
The average absolute deviation (AAD) in statistics is a measure of the dispersion or spread of a set of data points around a central value, usually the mean or median. It is calculated by taking the average of the absolute differences between each data point and the chosen central value. AAD provides a measure of the typical magnitude of deviations from the central value in a dataset, giving insights into the overall variability of the data.[5]
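As a small numerical illustration of these definitions (the data values are made up), signed deviations and the average absolute deviation can be computed as follows in Python with NumPy:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up sample
mean = data.mean()                                          # 5.0

deviations = data - mean                  # signed deviations d = x - m
aad = np.mean(np.abs(deviations))         # average absolute deviation about the mean

print(deviations)                         # sums (and averages) to zero by construction
print("AAD:", aad)                        # 1.5
```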
Least absolute deviation (LAD) is a statistical method used inregression analysisto estimate the coefficients of a linear model. Unlike the more common least squares method, which minimizes the sum of squared vertical distances (residuals) between the observed and predicted values, the LAD method minimizes the sum of the absolute vertical distances.
In the context of linear regression, if (x1,y1), (x2,y2), ... are the data points, andaandbare the coefficients to be estimated for the linear model
y=b+(a∗x){\displaystyle y=b+(a*x)}
the least absolute deviation estimates (aandb) are obtained by minimizing the sum of absolute residualsS=∑i|yi−(b+axi)|.{\displaystyle S=\sum _{i}|y_{i}-(b+ax_{i})|.}
The LAD method is less sensitive to outliers compared to the least squares method, making it a robust regression technique in the presence of skewed or heavy-tailed residual distributions.[6]
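A minimal sketch of LAD fitting by direct minimization of the sum of absolute residuals is shown below; it uses SciPy's general-purpose minimize with the Nelder–Mead method only because that method needs no gradients, whereas practical LAD implementations usually rely on linear-programming or quantile-regression formulations. The data are synthetic, with one deliberate outlier.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data with one outlier in the last observation.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 3.0, 5.1, 6.9, 9.2, 30.0])

def sum_abs_residuals(params):
    a, b = params                          # slope a and intercept b, as in y = b + a*x
    return np.sum(np.abs(y - (b + a * x)))

fit = minimize(sum_abs_residuals, x0=[1.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(f"LAD fit: slope {a_hat:.2f}, intercept {b_hat:.2f}")
```

Because the objective penalizes absolute rather than squared residuals, the fitted line is pulled far less toward the outlier than a least-squares fit would be.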
For anunbiased estimator, the signed deviations of the observations from the unobserved population parameter value average zero over an arbitrarily large number of samples. However, by construction the average of signed deviations of values from the sample mean value is always zero, though the average signed deviation from another measure of central tendency, such as the sample median, need not be zero.
Mean Signed Deviation is a statistical measure used to assess the average deviation of a set of values from a central point, usually the mean. It is calculated by taking the arithmetic mean of the signed differences between each data point and the mean of the dataset.
The term "signed" indicates that the deviations are considered with their respective signs, meaning whether they are above or below the mean. Positive deviations (above the mean) and negative deviations (below the mean) are included in the calculation. The mean signed deviation provides a measure of the average distance and direction of data points from the mean, offering insights into the overall trend and distribution of the data.[3]
Statistics of the distribution of deviations are used as measures ofstatistical dispersion.
Deviations, which measure the difference between observed values and some reference point, inherently carry units corresponding to the measurement scale used. For example, if lengths are being measured, deviations would be expressed in units like meters or feet. To make deviations unitless and facilitate comparisons across different datasets, one cannondimensionalize.
One common method involves dividing deviations by a measure of scale(statistical dispersion), with the population standard deviation used for standardizing or the sample standard deviation forstudentizing(e.g.,Studentized residual).
Another approach to nondimensionalization focuses on scaling by location rather than dispersion. The percent deviation offers an illustration of this method, calculated as the difference between the observed value and the accepted value, divided by the accepted value, and then multiplied by 100%. By scaling the deviation based on the accepted value, this technique allows for expressing deviations in percentage terms, providing a clear perspective on the relative difference between the observed and accepted values. Both methods of nondimensionalization serve the purpose of making deviations comparable and interpretable beyond the specific measurement units.[10]
In one example, a series of measurements of the speed of sound in a particular medium is taken. The accepted or expected value for the speed of sound in this medium, based on theoretical calculations, is 343 meters per second.
Now, during an experiment, multiple measurements are taken by different researchers. Researcher A measures the speed of sound as 340 meters per second, resulting in a deviation of −3 meters per second from the expected value. Researcher B, on the other hand, measures the speed as 345 meters per second, resulting in a deviation of +2 meters per second.
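Expressed as percent deviations (the deviation divided by the accepted value, times 100%), these two measurements can be worked out directly; a short Python illustration:

```python
accepted = 343.0                          # accepted speed of sound, m/s
observed = {"Researcher A": 340.0, "Researcher B": 345.0}

for name, value in observed.items():
    deviation = value - accepted
    percent = 100.0 * deviation / accepted
    print(f"{name}: deviation {deviation:+.1f} m/s, percent deviation {percent:+.2f}%")
```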
In this scientific context, deviation helps quantify how individual measurements differ from the theoretically predicted or accepted value. It provides insights into theaccuracy and precisionof experimental results, allowing researchers to assess the reliability of their data and potentially identify factors contributing to discrepancies.
In another example, suppose a chemical reaction is expected to yield 100 grams of a specific compound based on stoichiometry. However, in an actual laboratory experiment, several trials are conducted with different conditions.
In Trial 1, the actual yield is measured to be 95 grams, resulting in a deviation of −5 grams from the expected yield. In Trial 2, the actual yield is measured to be 102 grams, resulting in a deviation of +2 grams. These deviations from the expected value provide valuable information about the efficiency and reproducibility of the chemical reaction under different conditions.
Scientists can analyze these deviations to optimize reaction conditions, identify potential sources of error, and improve the overall yield and reliability of the process. The concept of deviation is crucial in assessing the accuracy of experimental results and making informed decisions to enhance the outcomes of scientific experiments.
|
https://en.wikipedia.org/wiki/Deviation_(statistics)
|
Instatistics,probable errordefines thehalf-rangeof an interval about acentral pointfor the distribution, such that half of the values from the distribution will lie within the interval and half outside.[1]Thus for asymmetric distributionit is equivalent to half theinterquartile range, or themedian absolute deviation. One such use of the termprobable errorin this sense is as the name for thescale parameterof theCauchy distribution, which does not have a standard deviation.
The probable error can also be expressed as a multiple of the standard deviation σ,[1][2]which requires that at least the secondstatistical momentof the distribution should exist, whereas the other definition does not. For anormal distributionthis isγ=0.6745×σ{\displaystyle \gamma =0.6745\times \sigma }(seedetails)
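The factor 0.6745 is the 75th percentile of the standard normal distribution, so for a given σ the probable error can be computed directly; a quick check in Python, assuming SciPy is available:

```python
from scipy.stats import norm

sigma = 2.0
factor = norm.ppf(0.75)              # ~0.6745: 75th percentile of the standard normal
probable_error = factor * sigma
print(factor, probable_error)        # half of all values fall within ±probable_error of the mean
```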
|
https://en.wikipedia.org/wiki/Probable_error
|
Themean absolute difference(univariate) is ameasure of statistical dispersionequal to the averageabsolute differenceof two independent values drawn from aprobability distribution. A related statistic is therelative mean absolute difference, which is the mean absolute difference divided by thearithmetic mean, and equal to twice theGini coefficient.
The mean absolute difference is also known as theabsolute mean difference(not to be confused with theabsolute valueof themean signed difference) and theGinimean difference(GMD).[1]The mean absolute difference is sometimes denoted by Δ or as MD.
The mean absolute difference is defined as the "average" or "mean", formally theexpected value, of the absolute difference of tworandom variablesXandYindependently and identically distributedwith the same (unknown) distribution henceforth calledQ.
Specifically, in the discrete case,
In the continuous case,
An alternative form of the equation is given by:
When the probability distribution has a finite and nonzeroarithmetic meanAM, the relative mean absolute difference, sometimes denoted by Δ or RMD, is defined byRMD=MD/AM.{\displaystyle \mathrm {RMD} =\mathrm {MD} /\mathrm {AM} .}
The relative mean absolute difference quantifies the mean absolute difference in comparison to the size of the mean and is a dimensionless quantity. The relative mean absolute difference is equal to twice theGini coefficientwhich is defined in terms of theLorenz curve. This relationship gives complementary perspectives to both the relative mean absolute difference and the Gini coefficient, including alternative ways of calculating their values.
The mean absolute difference is invariant to translations and negation, and varies proportionally to positive scaling. That is to say, ifXis a random variable andcis a constant:
The relative mean absolute difference is invariant to positive scaling, commutes with negation, and varies under translation in proportion to the ratio of the original and translated arithmetic means. That is to say, ifXis a random variable and c is a constant:
If a random variable has a positive mean, then its relative mean absolute difference will always be greater than or equal to zero. If, additionally, the random variable can only take on values that are greater than or equal to zero, then its relative mean absolute difference will be less than 2.
The mean absolute difference is twice theL-scale(the secondL-moment), while the standard deviation is the square root of the variance about the mean (the second conventional central moment). The differences between L-moments and conventional moments are first seen in comparing the mean absolute difference and the standard deviation (the first L-moment and first conventional moment are both the mean).
Both thestandard deviationand the mean absolute difference measure dispersion—how spread out are the values of a population or the probabilities of a distribution. The mean absolute difference is not defined in terms of a specific measure of central tendency, whereas the standard deviation is defined in terms of the deviation from the arithmetic mean. Because the standard deviation squares its differences, it tends to give more weight to larger differences and less weight to smaller differences compared to the mean absolute difference. When the arithmetic mean is finite, the mean absolute difference will also be finite, even when the standard deviation is infinite. See theexamplesfor some specific comparisons.
The recently introduceddistance standard deviationplays a similar role to the mean absolute difference, but the distance standard deviation works with centered distances. See alsoE-statistics.
For a random sampleSfrom a random variableX, consisting ofnvaluesyi, the statisticMD(S)=∑i=1n∑j=1n|yi−yj|n(n−1){\displaystyle \mathrm {MD} (S)={\frac {\sum _{i=1}^{n}\sum _{j=1}^{n}|y_{i}-y_{j}|}{n(n-1)}}}
is aconsistentandunbiasedestimatorof MD(X). The statisticRMD(S)=MD(S)/y¯{\displaystyle \mathrm {RMD} (S)=\mathrm {MD} (S)/{\bar {y}}}
is aconsistentestimatorof RMD(X), but is not, in general,unbiased.
Confidence intervals for RMD(X) can be calculated using bootstrap sampling techniques.
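A sketch of the sample statistics described above, together with a simple percentile bootstrap interval for RMD (NumPy assumed; the exponential sample is illustrative):

```python
import numpy as np

def sample_md(y):
    """Sample mean absolute difference: average of |y_i - y_j| over all pairs i != j."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    diffs = np.abs(y[:, None] - y[None, :])   # pairwise absolute differences
    return diffs.sum() / (n * (n - 1))        # diagonal terms are zero

def sample_rmd(y):
    """Sample relative mean absolute difference (consistent, not unbiased in general)."""
    return sample_md(y) / np.mean(y)

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=200)      # illustrative sample
print("MD:", sample_md(y), "RMD:", sample_rmd(y))

# Percentile bootstrap confidence interval for RMD
boot = [sample_rmd(rng.choice(y, size=len(y), replace=True)) for _ in range(1000)]
print("95% CI for RMD:", np.percentile(boot, [2.5, 97.5]))
```

For an exponential distribution the Gini coefficient is 1/2, so the sample RMD should land near 1 and the bootstrap interval should bracket that value.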
There does not exist, in general, an unbiased estimator for RMD(X), in part because of the difficulty of finding an unbiased estimation for multiplying by the inverse of the mean. For example, even where the sample is known to be taken from a random variableX(p) for an unknownp, andX(p) − 1has theBernoulli distribution, so thatPr(X(p) = 1) = 1 −pandPr(X(p) = 2) =p, then
But the expected value of any estimatorR(S) of RMD(X(p)) will be of the form:[citation needed]
where theriare constants. So E(R(S)) can never equal RMD(X(p)) for allpbetween 0 and 1.
|
https://en.wikipedia.org/wiki/Mean_absolute_difference
|
Convexityis a geometric property with a variety of applications ineconomics.[1]Informally, an economic phenomenon is convex when "intermediates (or combinations) are better than extremes". For example, an economic agent withconvex preferencespreferscombinationsof goods over having a lot of anyonesort of good; this represents a kind ofdiminishing marginal utilityof having more of the same good.
Convexity is a key simplifying assumption in many economic models, as it leads to market behavior that is easy to understand and which has desirable properties. For example, theArrow–Debreu modelofgeneral economic equilibriumposits that if preferences are convex and there is perfect competition, thenaggregate supplieswill equalaggregate demandsfor every commodity in the economy.
In contrast,non-convexityis associated withmarket failures, wheresupply and demanddiffer or wheremarket equilibriacan beinefficient.
The branch of mathematics which supplies the tools for convex functions and their properties is calledconvex analysis; non-convex phenomena are studied undernonsmooth analysis.
The economics depends upon the following definitions and results fromconvex geometry.
Arealvector spaceof twodimensionsmay be given aCartesian coordinate systemin which every point is identified by a list of two real numbers, called "coordinates", which are conventionally denoted byxandy. Two points in the Cartesian plane can beaddedcoordinate-wise(x1,y1)+(x2,y2)=(x1+x2,y1+y2);{\displaystyle (x_{1},y_{1})+(x_{2},y_{2})=(x_{1}+x_{2},y_{1}+y_{2});}
further, a point can bemultipliedby each real numberλcoordinate-wiseλ(x,y)=(λx,λy).{\displaystyle \lambda (x,y)=(\lambda x,\lambda y).}
More generally, any real vector space of (finite) dimensionDcan be viewed as thesetof all possible lists ofDreal numbers{ (v1,v2, . . . ,vD)} together with twooperations:vector additionandmultiplication by a real number. For finite-dimensional vector spaces, the operations of vector addition and real-number multiplication can each be defined coordinate-wise, following the example of the Cartesian plane.
In a real vector space, a set is defined to beconvexif, for each pair of its points, every point on theline segmentthat joins them iscoveredby the set. For example, a solidcubeis convex; however, anything that is hollow or dented, for example, acrescentshape, is non‑convex.Trivially, theempty setis convex.
More formally, a setQis convex if, for all pointsv0andv1inQand for every real numberλin theunit interval[0,1], the point(1−λ)v0+λv1{\displaystyle (1-\lambda )v_{0}+\lambda v_{1}}
is amemberofQ.
Bymathematical induction, a setQis convex if and only if everyconvex combinationof members ofQalso belongs toQ. By definition, aconvex combinationof an indexed subset {v0,v1, . . . ,vD} of a vector space is anyweighted averageλ0v0+λ1v1+ . . . +λDvD,for some indexed set of non‑negative real numbers {λd} satisfying theequationλ0+λ1+ . . . +λD= 1.
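A convex combination is just a weighted average with nonnegative weights summing to one; a tiny NumPy sketch with illustrative points and weights:

```python
import numpy as np

v = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # points v0, v1, v2
lam = np.array([0.5, 0.25, 0.25])                     # nonnegative weights summing to 1

point = lam @ v        # the convex combination 0.5*v0 + 0.25*v1 + 0.25*v2
print(point)           # [0.5 0.5], a point inside the triangle spanned by v0, v1, v2
```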
The definition of a convex set implies that theintersectionof two convex sets is a convex set. More generally, the intersection of a family of convex sets is a convex set.
For every subsetQof a real vector space, itsconvex hullConv(Q)is theminimalconvex set that containsQ. Thus Conv(Q) is the intersection of all the convex sets thatcoverQ. The convex hull of a set can be equivalently defined to be the set of all convex combinations of points inQ.
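For a finite point set, the convex hull can be computed directly; a short sketch using scipy.spatial.ConvexHull (assumed available) on an illustrative set of points in the plane:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Four corners of a square plus one interior point.
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
hull = ConvexHull(points)

print(points[hull.vertices])   # only the four corners belong to the hull
print(hull.volume)             # in 2D, "volume" is the enclosed area: 1.0
```

The interior point is discarded, and in two dimensions the hull object's volume attribute reports the enclosed area.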
Supporting hyperplaneis a concept ingeometry. Ahyperplanedivides a space into twohalf-spaces. A hyperplane is said tosupportasetS{\displaystyle S}in therealn-spaceRn{\displaystyle \mathbb {R} ^{n}}if it meets both of the following:S{\displaystyle S}is entirely contained in one of the two closed half-spaces bounded by the hyperplane, andS{\displaystyle S}has at least one boundary point on the hyperplane.
Here, a closed half-space is the half-space that includes the hyperplane.
Thistheoremstates that ifS{\displaystyle S}is a closedconvex setinRn,{\displaystyle \mathbb {R} ^{n},}andx{\displaystyle x}is a point on theboundaryofS,{\displaystyle S,}then there exists a supporting hyperplane containingx.{\displaystyle x.}
The hyperplane in the theorem may not be unique, as noticed in the second picture on the right. If the closed setS{\displaystyle S}is not convex, the statement of the theorem is not true at all points on the boundary ofS,{\displaystyle S,}as illustrated in the third picture on the right.
An optimal basket of goods occurs where the consumer's convexpreference setissupportedby the budget constraint, as shown in the diagram. If the preference set is convex, then the consumer's set of optimal decisions is a convex set, for example, a unique optimal basket (or even a line segment of optimal baskets).
For simplicity, we shall assume that the preferences of a consumer can be described by autility functionthat is acontinuous function, which implies that thepreference setsareclosed. (The meanings of "closed set" is explained below, in the subsection on optimization applications.)
If a preference set is non‑convex, then some prices produce a budget supporting two different optimal consumption decisions. For example, we can imagine that, for zoos, a lion costs as much as an eagle, and further that a zoo's budget suffices for one eagle or one lion. We can suppose also that a zoo-keeper views either animal as equally valuable. In this case, the zoo would purchase either one lion or one eagle. Of course, a contemporary zoo-keeper does not want to purchase half an eagle and half a lion (or agriffin)! Thus, the contemporary zoo-keeper's preferences are non‑convex: The zoo-keeper prefers having either animal to having any strictly convex combination of both.
Non‑convex sets have been incorporated in the theories of general economic equilibria,[2]ofmarket failures,[3]and ofpublic economics.[4]These results are described in graduate-level textbooks inmicroeconomics,[5]general equilibrium theory,[6]game theory,[7]mathematical economics,[8]and applied mathematics (for economists).[9]TheShapley–Folkman lemmaresults establish that non‑convexities are compatible with approximate equilibria in markets with many consumers; these results also apply toproduction economieswith many smallfirms.[10]
In "oligopolies" (markets dominated by a few producers), especially in "monopolies" (markets dominated by one producer), non‑convexities remain important.[11]Concerns with large producers exploiting market power in fact initiated the literature on non‑convex sets, whenPiero Sraffawrote about on firms with increasingreturns to scalein 1926,[12]after whichHarold Hotellingwrote aboutmarginal cost pricingin 1938.[13]Both Sraffa and Hotelling illuminated themarket powerof producers without competitors, clearly stimulating a literature on the supply-side of the economy.[14]Non‑convex sets arise also withenvironmental goods(and otherexternalities),[15][16]withinformation economics,[17]and withstock markets[11](and otherincomplete markets).[18][19]Such applications continued to motivate economists to study non‑convex sets.[20]
Economists have increasingly studied non‑convex sets withnonsmooth analysis, which generalizesconvex analysis. "Non‑convexities in [both] production and consumption ... required mathematical tools that went beyond convexity, and further development had to await the invention of non‑smooth calculus" (for example, Francis Clarke'slocally Lipschitzcalculus), as described byRockafellar & Wets (1998)[21]andMordukhovich (2006),[22]according toKhan (2008).[23]Brown (1991, pp. 1967–1968) wrote that the "major methodological innovation in the general equilibrium analysis of firms with pricing rules" was "the introduction of the methods of non‑smooth analysis, as a [synthesis] of global analysis (differential topology) and [of] convex analysis." According toBrown (1991, p. 1966), "Non‑smooth analysis extends the local approximation of manifolds by tangent planes [and extends] the analogous approximation of convex sets by tangent cones to sets" that can be non‑smooth or non‑convex.[24]Economists have also usedalgebraic topology.[25]
|
https://en.wikipedia.org/wiki/Convexity_in_economics
|
Ineconomics,non-convexityrefers to violations of theconvexity assumptions of elementary economics. Basic economics textbooks concentrate on consumers withconvex preferences(that do not prefer extremes to in-between values) and convexbudget setsand on producers with convexproduction sets; for convex models, the predicted economic behavior is well understood.[1][2]When convexity assumptions are violated, then many of the good properties of competitive markets need not hold: Thus, non-convexity is associated withmarket failures,[3][4]wheresupply and demanddiffer or wheremarket equilibriacan beinefficient.[1][4][5][6][7][8]Non-convex economies are studied withnonsmooth analysis, which is a generalization ofconvex analysis.[8][9][10][11]
If a preference set isnon-convex, then some prices determine a budget-line that supports twoseparateoptimal-baskets. For example, we can imagine that, for zoos, a lion costs as much as an eagle, and further that a zoo's budget suffices for one eagle or one lion. We can suppose also that a zoo-keeper views either animal as equally valuable. In this case, the zoo would purchase either one lion or one eagle. Of course, a contemporary zoo-keeper does not want to purchase half of an eagle and half of a lion. Thus, the zoo-keeper's preferences are non-convex: The zoo-keeper prefers having either animal to having any strictly convex combination of both.
When the consumer's preference set is non-convex, then (for some prices) the consumer's demand is notconnected; A disconnected demand implies some discontinuous behavior by the consumer, as discussed byHarold Hotelling:
If indifference curves for purchases be thought of as possessing a wavy character, convex to the origin in some regions and concave in others, we are forced to the conclusion that it is only the portions convex to the origin that can be regarded as possessing any importance, since the others are essentially unobservable. They can be detected only by the discontinuities that may occur in demand with variation in price-ratios, leading to an abrupt jumping of a point of tangency across a chasm when the straight line is rotated. But, while such discontinuities may reveal the existence of chasms, they can never measure their depth. The concave portions of the indifference curves and their many-dimensional generalizations, if they exist, must forever remain in unmeasurable obscurity.[12]
The difficulties of studying non-convex preferences were emphasized byHerman Wold[13]and again byPaul Samuelson, who wrote that non-convexities are "shrouded in eternaldarkness ...",[14]according to Diewert.[15]
When convexity assumptions are violated, then many of the good properties of competitive markets need not hold: Thus, non-convexity is associated withmarket failures, wheresupply and demanddiffer or wheremarket equilibriacan beinefficient.[1]Non-convex preferences were illuminated from 1959 to 1961 by a sequence of papers inThe Journal of Political Economy(JPE). The main contributors wereMichael Farrell,[16]Francis Bator,[17]Tjalling Koopmans,[18]and Jerome Rothenberg.[19]In particular, Rothenberg's paper discussed the approximate convexity of sums of non-convex sets.[20]TheseJPE-papers stimulated a paper byLloyd ShapleyandMartin Shubik, which considered convexified consumer-preferences and introduced the concept of an "approximate equilibrium".[21]TheJPE-papers and the Shapley–Shubik paper influenced another notion of "quasi-equilibria", due toRobert Aumann.[22][23]
Non-convex sets have been incorporated in the theories of general economic equilibria.[24]These results are described in graduate-level textbooks inmicroeconomics,[25]general equilibrium theory,[26]game theory,[27]mathematical economics,[28]and applied mathematics (for economists).[29]TheShapley–Folkman lemmaestablishes that non-convexities are compatible with approximate equilibria in markets with many consumers; these results also apply toproduction economieswith many smallfirms.[30]
Non-convexity is important underoligopoliesand especiallymonopolies.[8]Concerns with large producers exploiting market power initiated the literature on non-convex sets, whenPiero Sraffawrote about firms with increasingreturns to scalein 1926,[31]after whichHarold Hotellingwrote aboutmarginal costpricing in 1938.[32]Both Sraffa and Hotelling illuminated themarket powerof producers without competitors, clearly stimulating a literature on the supply-side of the economy.[33]
Recent research has recognized non-convexity in new areas of economics. In these areas, non-convexity is associated withmarket failures, whereequilibrianeed not beefficientor where no competitive equilibrium exists becausesupply and demanddiffer.[1][4][5][6][7][8]Non-convex sets arise also withenvironmental goods(and otherexternalities),[6][7]with market failures,[3]and withpublic economics.[5][34]Non-convexities occur also withinformation economics,[35]and withstock markets[8](and otherincomplete markets).[36][37]Such applications continued to motivate economists to study non-convex sets.[1]In some cases, non-linear pricing or bargaining may overcome the failures of markets with competitive pricing; in other cases, regulation may be justified.
The previously mentioned applications concern non-convexities in finite-dimensionalvector spaces, where points represent commodity bundles. However, economists also consider dynamic problems of optimization over time, using the theories ofdifferential equations,dynamic systems,stochastic processes, andfunctional analysis, together with the associated optimization methods, notably dynamic programming.
In these theories, regular problems involve convex functions defined on convex domains, and this convexity allows simplifications of techniques and economically meaningful interpretations of the results.[43][44][45]In economics, dynamic programming was used by Martin Beckmann and Richard F. Muth for work oninventory theoryandconsumption theory.[46]Robert C. Merton used dynamic programming in his 1973 article on theintertemporal capital asset pricing model.[47](See alsoMerton's portfolio problem). In Merton's model, investors choose between income today and future income or capital gains, and their solution is found via dynamic programming. Stokey, Lucas & Prescott use dynamic programming to solve problems in economic theory, problems involving stochastic processes.[48]Dynamic programming has been used in optimaleconomic growth,resource extraction,principal–agent problems,public finance, businessinvestment,asset pricing,factorsupply, andindustrial organization. Ljungqvist & Sargent apply dynamic programming to study a variety of theoretical questions inmonetary policy,fiscal policy,taxation, economic growth,search theory, andlabor economics.[49]Dixit & Pindyck used dynamic programming forcapital budgeting.[50]For dynamic problems, non-convexities also are associated with market failures,[51]just as they are for fixed-time problems.[52]
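As a concrete illustration of the dynamic-programming methods mentioned above, the following minimal value-function-iteration sketch solves a textbook "cake-eating" problem. The discount factor, the square-root utility, and the grid are assumptions made for the example; the specific problem is not taken from the works cited above, but it is the kind of convex dynamic program they treat.

```python
# Minimal value-function iteration for a cake-eating problem (an illustrative
# sketch): maximize sum_t beta**t * sqrt(c_t) subject to k_{t+1} = k_t - c_t.
import numpy as np

beta = 0.95                                   # assumed discount factor
grid = np.linspace(0.0, 1.0, 201)             # grid of cake sizes (states)
V = np.zeros_like(grid)

c = grid[:, None] - grid[None, :]             # consumption implied by each (k, k') pair
feasible = c >= 0

for _ in range(2000):                         # Bellman iteration (a contraction)
    value = np.where(feasible, np.sqrt(np.clip(c, 0.0, None)) + beta * V[None, :], -np.inf)
    V_new = value.max(axis=1)
    done = np.max(np.abs(V_new - V)) < 1e-9
    V = V_new
    if done:
        break

# The known closed form for this problem is V(k) = sqrt(k / (1 - beta**2)),
# which the grid approximation should roughly reproduce at k = 1:
print(V[-1], np.sqrt(1.0 / (1.0 - beta**2)))
```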
Economists have increasingly studied non-convex sets withnonsmooth analysis, which generalizesconvex analysis. Convex analysis centers on convex sets and convex functions, for which it provides powerful ideas and clear results, but it is not adequate for the analysis of non-convexities, such as increasing returns to scale.[53]"Non-convexities in [both] production and consumption ... required mathematical tools that went beyond convexity, and further development had to await the invention of non-smooth calculus": For example,Clarke'sdifferential calculusforLipschitz continuous functions, which usesRademacher's theoremand which is described byRockafellar & Wets (1998)[54]andMordukhovich (2006),[9]according toKhan (2008).[10]Brown (1995, pp. 1967–1968)wrote that the "major methodological innovation in the general equilibrium analysis of firms with pricing rules" was "the introduction of the methods of non-smooth analysis, as a [synthesis] of global analysis (differential topology) and [of] convex analysis." According toBrown (1995, p. 1966), "Non-smooth analysis extends the local approximation of manifolds by tangent planes [and extends] the analogous approximation of convex sets by tangent cones to sets" that can be non-smooth or non-convex.[11][55]
Exercise 45, page 146:Wold, Herman; Juréen, Lars (in association with Wold) (1953). "8 Some further applications of preference fields (pp. 129–148)".Demand analysis: A study in econometrics. Wiley publications in statistics. New York: John Wiley and Sons, Inc. Stockholm: Almqvist and Wiksell.MR0064385.
It will be noted that any point where the indifference curves are convex rather than concave cannot be observed in a competitive market. Such points are shrouded in eternal darkness—unless we make our consumer a monopsonist and let him choose between goods lying on a very convex "budget curve" (along which he is affecting the price of what he buys). In this monopsony case, we could still deduce the slope of the man's indifference curve from the slope of the observed constraint at the equilibrium point.
A gulf profound as that Serbonian Bog
Betwixt Damiata and Mount Casius old,
Where Armies whole have sunk.
Koopmans (1961, p. 478) and others—for example,Farrell (1959, pp. 390–391) andFarrell (1961a, p. 484),Bator (1961a, pp. 482–483),Rothenberg (1960, p. 438), andStarr (1969, p. 26)—commented onKoopmans (1957, pp. 1–126, especially 9–16 [1.3 Summation of opportunity sets], 23–35 [1.6 Convex sets and the price implications of optimality], and 35–37 [1.7 The role of convexity assumptions in the analysis]):
Koopmans, Tjalling C.(1957). "Allocation of resources and the price system". InKoopmans, Tjalling C(ed.).Three essays on the state of economic science. New York: McGraw–Hill Book Company. pp.1–126.ISBN0-07-035337-9.
Aumann, Robert J.(January–April 1964). "Markets with a continuum of traders".Econometrica.32(1–2):39–50.doi:10.2307/1913732.JSTOR1913732.MR0172689.
Aumann, Robert J.(August 1965)."Integrals of set-valued functions".Journal of Mathematical Analysis and Applications.12(1):1–12.doi:10.1016/0022-247X(65)90049-1.MR0185073.
Pages 52–55 with applications on pages 145–146, 152–153, and 274–275:Mas-Colell, Andreu(1985). "1.L Averages of sets".The Theory of General Economic Equilibrium: ADifferentiableApproach. Econometric Society Monographs. Cambridge University Press.ISBN0-521-26514-2.MR1113262.
Theorem C(6) on page 37 and applications on pages 115-116, 122, and 168:Hildenbrand, Werner(1974).Core and equilibria of a large economy. Princeton studies in mathematical economics. Princeton, NJ: Princeton University Press.ISBN978-0-691-04189-6.MR0389160.
Page 628:Mas–Colell, Andreu; Whinston, Michael D.; Green, Jerry R. (1995). "17.1 Large economies and nonconvexities".Microeconomic theory. Oxford University Press. pp.627–630.ISBN978-0-19-507340-9.
Ellickson (1994, p. xviii), and especially Chapter 7 "Walras meets Nash" (especially section 7.4 "Nonconvexity" pages 306–310 and 312, and also 328–329) and Chapter 8 "What is Competition?" (pages 347 and 352):Ellickson, Bryan (1994).Competitive equilibrium: Theory and applications. Cambridge University Press.ISBN978-0-521-31988-1.
Page 309:Moore, James C. (1999).Mathematical methods for economic theory: VolumeI. Studies in economic theory. Vol. 9. Berlin: Springer-Verlag.doi:10.1007/978-3-662-08544-8.ISBN3-540-66235-9.MR1727000.
Pages 47–48:Florenzano, Monique; Le Van, Cuong (2001).Finite dimensional convexity and optimization. Studies in economic theory. Vol. 13. in cooperation with Pascal Gourdel. Berlin: Springer-Verlag.doi:10.1007/978-3-642-56522-9.ISBN3-540-41516-5.MR1878374.S2CID117240618.
Heal, G. M. (April 1998).The Economics of Increasing Returns(PDF). PaineWebber working paper series in money, economics, and finance. Columbia Business School. PW-97-20. Archived fromthe original(PDF)on 15 September 2015. Retrieved5 March2011.
|
https://en.wikipedia.org/wiki/Non-convexity_(economics)
|
This is alist of convexity topics, by Wikipedia page.
|
https://en.wikipedia.org/wiki/List_of_convexity_topics
|
Moritz Werner Fenchel(German:[ˈfɛnçəl]; 3 May 1905 – 24 January 1988) was a German-Danishmathematicianknown for his contributions togeometryand tooptimization theory. Fenchel established the basic results ofconvex analysisand nonlinear optimization theory which would, in time, serve as the foundation fornonlinear programming. A German-born Jew and early refugee from Nazi suppression of intellectuals, Fenchel lived most of his life in Denmark. Fenchel's monographs and lecture notes are considered influential.
Fenchel was born on 3 May 1905 inBerlin, Germany.[1]His younger brother was the Israeli film director and architectHeinz Fenchel.
Fenchel studied mathematics and physics at theUniversity of Berlinbetween 1923 and 1928.[1]He wrote his doctorate thesis ingeometry(Über Krümmung und Windung geschlossener Raumkurven)[2]underLudwig Bieberbach.[1]
From 1928 to 1933, Fenchel was ProfessorE. Landau's Assistant at theUniversity of Göttingen. During a one-year leave (onRockefeller Fellowship) between 1930 and 1931, Fenchel spent time in Rome withTullio Levi-Civita, as well as inCopenhagenwithHarald BohrandTommy Bonnesen.
He visited Denmark again in 1932.[1]
Fenchel taught at Göttingen until 1933, when theNazi discrimination lawsled tomass-firings of Jews.[3]
Fenchel emigrated to Denmark at some point between April and September 1933, ultimately obtaining a position at theUniversity of Copenhagen. In December 1933, Fenchel married fellow German refugee mathematicianKäte Sperling.[1]
WhenGermany occupied Denmark, Fenchel and roughly eight-thousand other Danish Jewsreceived refugein Sweden, where he taught (between 1943 and 1945) at the Danish School inLund.[1]After the Allied powers'liberation of Denmark, Fenchel returned to Copenhagen.
In 1946, Fenchel was elected a member of theRoyal Danish Academy of Sciences and Letters.[1]
On leave between 1949 and 1951, Fenchel taught in the U.S. at theUniversity of Southern California,Stanford University, andPrinceton University.[1]
From 1952 to 1956 Fenchel was the professor in mechanics at the Polytechnic in Copenhagen.[1]
From 1956 to 1974 he was the professor in mathematics at theUniversity of Copenhagen.[1]
Professor Fenchel died on 24 January 1988.[1]
Fenchel lectured on "Convex Sets, Cones, and Functions" at Princeton University in the early 1950s. His lecture notes shaped the field ofconvex analysis, according to the monographConvex AnalysisofR. T. Rockafellar.
|
https://en.wikipedia.org/wiki/Werner_Fenchel
|
Inmathematical optimizationtheory,dualityor theduality principleis the principle thatoptimization problemsmay be viewed from either of two perspectives, theprimal problemor thedual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal.[1]This fact is calledweak duality.
In general, the optimal values of the primal and dual problems need not be equal. Their difference is called theduality gap. Forconvex optimizationproblems, the duality gap is zero under aconstraint qualificationcondition. This fact is calledstrong duality.
Usually the term "dual problem" refers to theLagrangian dual problembut other dual problems are used – for example, theWolfe dual problemand theFenchel dual problem. The Lagrangian dual problem is obtained by forming theLagrangianof a minimization problem by using nonnegativeLagrange multipliersto add the constraints to the objective function, and then solving for the primal variable values that minimize the original objective function. This solution gives the primal variables as functions of the Lagrange multipliers, which are called dual variables, so that the new problem is to maximize the objective function with respect to the dual variables under the derived constraints on the dual variables (including at least the nonnegativity constraints).
In general given twodual pairsofseparatedlocally convex spaces(X,X∗){\displaystyle \left(X,X^{*}\right)}and(Y,Y∗){\displaystyle \left(Y,Y^{*}\right)}and the functionf:X→R∪{+∞}{\displaystyle f:X\to \mathbb {R} \cup \{+\infty \}}, we can define the primal problem as findingx^{\displaystyle {\hat {x}}}such thatf(x^)=infx∈Xf(x).{\displaystyle f({\hat {x}})=\inf _{x\in X}f(x).\,}In other words, ifx^{\displaystyle {\hat {x}}}exists,f(x^){\displaystyle f({\hat {x}})}is theminimumof the functionf{\displaystyle f}and theinfimum(greatest lower bound) of the function is attained.
If there are constraint conditions, these can be built into the functionf{\displaystyle f}by lettingf~=f+Iconstraints{\displaystyle {\tilde {f}}=f+I_{\mathrm {constraints} }}whereIconstraints{\displaystyle I_{\mathrm {constraints} }}is a suitable function onX{\displaystyle X}that has a minimum 0 on the constraints, and for which one can prove thatinfx∈Xf~(x)=infxconstrainedf(x){\displaystyle \inf _{x\in X}{\tilde {f}}(x)=\inf _{x\ \mathrm {constrained} }f(x)}. The latter condition is trivially, but not always conveniently, satisfied for thecharacteristic function(i.e.Iconstraints(x)=0{\displaystyle I_{\mathrm {constraints} }(x)=0}forx{\displaystyle x}satisfying the constraints andIconstraints(x)=∞{\displaystyle I_{\mathrm {constraints} }(x)=\infty }otherwise). Then extendf~{\displaystyle {\tilde {f}}}to aperturbation functionF:X×Y→R∪{+∞}{\displaystyle F:X\times Y\to \mathbb {R} \cup \{+\infty \}}such thatF(x,0)=f~(x){\displaystyle F(x,0)={\tilde {f}}(x)}.[2]
Theduality gapis the difference of the right and left hand sides of the inequality supy∗∈Y∗−F∗(0,y∗)≤infx∈XF(x,0),{\displaystyle \sup _{y^{*}\in Y^{*}}-F^{*}(0,y^{*})\leq \inf _{x\in X}F(x,0),}
whereF∗{\displaystyle F^{*}}is theconvex conjugatein both variables andsup{\displaystyle \sup }denotes thesupremum(least upper bound).[2][3][4]
The duality gap is the difference between the values of any primal solutions and any dual solutions. Ifd∗{\displaystyle d^{*}}is the optimal dual value andp∗{\displaystyle p^{*}}is the optimal primal value, then the duality gap is equal top∗−d∗{\displaystyle p^{*}-d^{*}}. This value is always greater than or equal to 0 (for minimization problems). The duality gap is zero if and only ifstrong dualityholds. Otherwise the gap is strictly positive andweak dualityholds.[5]
In computational optimization, another "duality gap" is often reported, which is the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. This alternative "duality gap" quantifies the discrepancy between the value of a current feasible but suboptimal iterate for the primal problem and the value of the dual problem; the value of the dual problem is, under regularity conditions, equal to the value of theconvex relaxationof the primal problem: The convex relaxation is the problem that arises when a non-convex feasible set is replaced with its closedconvex hulland a non-convex objective function is replaced with its convexclosure, that is, the function whoseepigraphis the closed convex hull of the epigraph of the original primal objective function.[6][7][8][9][10][11][12][13][14][15][16]
Linear programmingproblems areoptimizationproblems in which theobjective functionand theconstraintsare alllinear. In the primal problem, the objective function is alinear combinationofnvariables. There aremconstraints, each of which places an upper bound on a linear combination of thenvariables. The goal is to maximize the value of the objective function subject to the constraints. Asolutionis avector(a list) ofnvalues that achieves the maximum value for the objective function.
In the dual problem, the objective function is a linear combination of themvalues that are the limits in themconstraints from the primal problem. There arendual constraints, each of which places a lower bound on a linear combination ofmdual variables.
In the linear case, in the primal problem, from each sub-optimal point that satisfies all the constraints, there is a direction orsubspaceof directions to move that increases the objective function. Moving in any such direction is said to remove slack between thecandidate solutionand one or more constraints. Aninfeasiblevalue of the candidate solution is one that exceeds one or more of the constraints.
In the dual problem, the dual vector multiplies the constraints that determine the positions of the constraints in the primal. Varying the dual vector in the dual problem is equivalent to revising the upper bounds in the primal problem. The lowest upper bound is sought. That is, the dual vector is minimized in order to remove slack between the candidate positions of the constraints and the actual optimum. An infeasible value of the dual vector is one that is too low. It sets the candidate positions of one or more of the constraints in a position that excludes the actual optimum.
This intuition is made formal by the equations inLinear programming: Duality.
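A small numerical sketch of this primal–dual relationship, with illustrative data (the particular objective and constraints below are assumptions, not taken from the text), can be run with SciPy's linear-programming routine; strong duality for linear programs makes the two optimal values coincide.

```python
# Illustrative LP duality check (the data are assumptions chosen for the example):
#   primal: max 3*x1 + 5*x2  s.t.  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0
#   dual:   min 4*y1 + 12*y2 + 18*y3  s.t.  A^T y >= c,  y >= 0
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)    # maximize c^T x
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)   # minimize b^T y with A^T y >= c

print(-primal.fun, dual.fun)   # both equal 36.0: no duality gap for linear programs
```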
Innonlinear programming, the constraints are not necessarily linear. Nonetheless, many of the same principles apply.
To ensure that the global maximum of a non-linear problem can be identified easily, the problem formulation often requires that the functions be convex and have compact lower level sets. This is the significance of theKarush–Kuhn–Tucker conditions. They provide necessary conditions for identifying local optima of non-linear programming problems. There are additional conditions (constraint qualifications) that are necessary so that it will be possible to define the direction to anoptimalsolution. An optimal solution is one that is alocal optimum, but possibly not a global optimum.
Motivation[17]
Suppose we want to solve the followingnonlinear programmingproblem:
minimizef0(x)subject tofi(x)≤0,i∈{1,…,m}{\displaystyle {\begin{aligned}{\text{minimize }}&f_{0}(x)\\{\text{subject to }}&f_{i}(x)\leq 0,\ i\in \left\{1,\ldots ,m\right\}\\\end{aligned}}}
The problem has constraints; we would like to convert it to a program without constraints. Theoretically, it is possible to do it by minimizing the functionJ(x){\displaystyle J(x)}, defined as
J(x)=f0(x)+∑iI[fi(x)]{\displaystyle J(x)=f_{0}(x)+\sum _{i}I[f_{i}(x)]}
whereI{\displaystyle I}is an infinitestep function:I[u]=0{\displaystyle I[u]=0}ifu≤0{\displaystyle u\leq 0}, andI[u]=∞{\displaystyle I[u]=\infty }otherwise. ButJ(x){\displaystyle J(x)}is hard to solve as it is not continuous. It is possible to "approximate"I[u]{\displaystyle I[u]}byλu{\displaystyle \lambda u}, whereλ{\displaystyle \lambda }is a positive constant. This yields a function known as the Lagrangian:
L(x,λ)=f0(x)+∑iλifi(x){\displaystyle L(x,\lambda )=f_{0}(x)+\sum _{i}\lambda _{i}f_{i}(x)}
Note that, for everyx{\displaystyle x},
maxλ≥0L(x,λ)=J(x){\displaystyle \max _{\lambda \geq 0}L(x,\lambda )=J(x)}.
Proof: Ifx{\displaystyle x}satisfies all the constraints (fi(x)≤0{\displaystyle f_{i}(x)\leq 0}for alli{\displaystyle i}), then every termλifi(x){\displaystyle \lambda _{i}f_{i}(x)}is non-positive, so the maximum overλ≥0{\displaystyle \lambda \geq 0}is attained atλ=0{\displaystyle \lambda =0}and equalsf0(x)=J(x){\displaystyle f_{0}(x)=J(x)}. If instead some constraint is violated (fi(x)>0{\displaystyle f_{i}(x)>0}), then lettingλi→∞{\displaystyle \lambda _{i}\to \infty }drivesL(x,λ)→∞=J(x){\displaystyle L(x,\lambda )\to \infty =J(x)}.
Therefore, the original problem is equivalent to:
minxmaxλ≥0L(x,λ){\displaystyle \min _{x}\max _{\lambda \geq 0}L(x,\lambda )}.
By reversing the order of min and max, we get:
maxλ≥0minxL(x,λ){\displaystyle \max _{\lambda \geq 0}\min _{x}L(x,\lambda )}.
Thedual functionis the inner problem in the above formula:
g(λ):=minxL(x,λ){\displaystyle g(\lambda ):=\min _{x}L(x,\lambda )}.
TheLagrangian dual programis the program of maximizing g:
maxλ≥0g(λ){\displaystyle \max _{\lambda \geq 0}g(\lambda )}.
The optimal solution to the dual program is a lower bound for the optimal solution of the original (primal) program; this is theweak dualityprinciple.
If the primal problem is convex and bounded from below, and there exists a point in which all nonlinear constraints are strictly satisfied (Slater's condition), then the optimal solution to the dual programequalsthe optimal solution of the primal program; this is thestrong dualityprinciple. In this case, we can solve the primal program by finding an optimal solutionλ* to the dual program, and then solving:
minxL(x,λ∗){\displaystyle \min _{x}L(x,\lambda ^{*})}.
Note that, to use either the weak or the strong duality principle, we need a way to compute g(λ). In general this may be hard, as we need to solve a different minimization problem for everyλ. But for some classes of functions, it is possible to get an explicit formula for g(). Solving the primal and dual programs together is often easier than solving only one of them. Examples arelinear programmingandquadratic programming. A better and more general approach to duality is provided byFenchel's duality theorem.[18]: Sub.3.3.1
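For one concrete case in which g(λ) has an explicit formula, take the problem of minimizing x² subject to 1 − x ≤ 0 (an example chosen here for illustration, not taken from the text). Then g(λ) = min_x [x² + λ(1 − x)] = λ − λ²/4, the dual maximum is 1 at λ* = 2, and it equals the primal optimum at x* = 1, as strong duality (via Slater's condition) predicts. A short numerical check:

```python
# Illustrative Lagrangian dual for: minimize x**2 subject to 1 - x <= 0.
# The Lagrangian is L(x, lam) = x**2 + lam*(1 - x); its minimizer over x is x = lam/2,
# so g(lam) = lam - lam**2/4.
import numpy as np

def g(lam):
    x = lam / 2.0                        # argmin of the Lagrangian over x
    return x**2 + lam * (1.0 - x)        # equals lam - lam**2/4

lams = np.linspace(0.0, 5.0, 501)
values = g(lams)
print("dual optimum  :", values.max(), "at lambda =", lams[np.argmax(values)])   # 1.0 at 2.0
print("primal optimum:", 1.0**2)                                                 # x* = 1
```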
Another condition in which the min-max and max-min are equal is when the Lagrangian has asaddle point: (x∗, λ∗) is a saddle point of the Lagrange function L if and only if x∗ is an optimal solution to the primal, λ∗ is an optimal solution to the dual, and the optimal values in the indicated problems are equal to each other.[18]: Prop.3.2.2
Given anonlinear programmingproblem in standard form minimizef0(x)subject tofi(x)≤0,i∈{1,…,m}hi(x)=0,i∈{1,…,p}{\displaystyle {\begin{aligned}{\text{minimize }}&f_{0}(x)\\{\text{subject to }}&f_{i}(x)\leq 0,\ i\in \left\{1,\ldots ,m\right\}\\&h_{i}(x)=0,\ i\in \left\{1,\ldots ,p\right\}\end{aligned}}}
with the domainD⊂Rn{\displaystyle {\mathcal {D}}\subset \mathbb {R} ^{n}}having non-empty interior, theLagrangian functionL:Rn×Rm×Rp→R{\displaystyle {\mathcal {L}}:\mathbb {R} ^{n}\times \mathbb {R} ^{m}\times \mathbb {R} ^{p}\to \mathbb {R} }is defined as L(x,λ,ν)=f0(x)+∑i=1mλifi(x)+∑i=1pνihi(x).{\displaystyle {\mathcal {L}}(x,\lambda ,\nu )=f_{0}(x)+\sum _{i=1}^{m}\lambda _{i}f_{i}(x)+\sum _{i=1}^{p}\nu _{i}h_{i}(x).}
The vectorsλ{\displaystyle \lambda }andν{\displaystyle \nu }are called thedual variablesorLagrange multiplier vectorsassociated with the problem. TheLagrange dual functiong:Rm×Rp→R{\displaystyle g:\mathbb {R} ^{m}\times \mathbb {R} ^{p}\to \mathbb {R} }is defined as g(λ,ν)=infx∈DL(x,λ,ν).{\displaystyle g(\lambda ,\nu )=\inf _{x\in {\mathcal {D}}}{\mathcal {L}}(x,\lambda ,\nu ).}
The dual functiongis concave, even when the initial problem is not convex, because it is a point-wise infimum of affine functions. The dual function yields lower bounds on the optimal valuep∗{\displaystyle p^{*}}of the initial problem; for anyλ≥0{\displaystyle \lambda \geq 0}and anyν{\displaystyle \nu }we haveg(λ,ν)≤p∗{\displaystyle g(\lambda ,\nu )\leq p^{*}}.
If aconstraint qualificationsuch asSlater's conditionholds and the original problem is convex, then we havestrong duality, i.e.d∗=maxλ≥0,νg(λ,ν)=inff0=p∗{\displaystyle d^{*}=\max _{\lambda \geq 0,\nu }g(\lambda ,\nu )=\inf f_{0}=p^{*}}.
For a convex minimization problem with inequality constraints, minxf(x)subject togj(x)≤0,j=1,…,m,{\displaystyle \min _{x}f(x)\quad {\text{subject to }}g_{j}(x)\leq 0,\ j=1,\ldots ,m,}
the Lagrangian dual problem is maxu≥0infx(f(x)+∑j=1mujgj(x)),{\displaystyle \max _{u\geq 0}\,\inf _{x}\left(f(x)+\sum _{j=1}^{m}u_{j}g_{j}(x)\right),}
where the objective function is the Lagrange dual function. Provided that the functionsf{\displaystyle f}andg1,…,gm{\displaystyle g_{1},\ldots ,g_{m}}are continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem maxx,uf(x)+∑j=1mujgj(x)subject to∇f(x)+∑j=1muj∇gj(x)=0,u≥0{\displaystyle \max _{x,u}\,f(x)+\sum _{j=1}^{m}u_{j}g_{j}(x)\quad {\text{subject to }}\nabla f(x)+\sum _{j=1}^{m}u_{j}\,\nabla g_{j}(x)=0,\ u\geq 0}
is called theWolfe dual problem. This problem may be difficult to deal with computationally, because the objective function is not concave in the joint variables(u,x){\displaystyle (u,x)}. Also, the equality constraint∇f(x)+∑j=1muj∇gj(x)=0{\displaystyle \nabla f(x)+\sum _{j=1}^{m}u_{j}\,\nabla g_{j}(x)=0}is nonlinear in general, so the Wolfe dual problem is typically a nonconvex optimization problem. In any case,weak dualityholds.[19]
According toGeorge Dantzig, the duality theorem for linear optimization was conjectured byJohn von Neumannimmediately after Dantzig presented the linear programming problem. Von Neumann noted that he was using information from hisgame theory, and conjectured that the two-person zero-sum matrix game was equivalent to linear programming. Rigorous proofs were first published in 1948 byAlbert W. Tuckerand his group. (Dantzig's foreword to Nering and Tucker, 1993)
Insupport vector machines(SVMs), reformulating the primal problem as its dual problem makes it possible to apply theKernel trick, although the dual formulation has historically had higher time complexity.
|
https://en.wikipedia.org/wiki/Dual_problem
|
In mathematics,Fenchel's duality theoremis a result in the theory of convex functions named afterWerner Fenchel.
Letƒbe aproper convex functiononRnand letgbe a proper concave function onRn. Then, if regularity conditions are satisfied, infx(f(x)−g(x))=supp(g∗(p)−f∗(p)),{\displaystyle \inf _{x}{\bigl (}f(x)-g(x){\bigr )}=\sup _{p}{\bigl (}g^{*}(p)-f^{*}(p){\bigr )},}
whereƒ*is theconvex conjugateofƒ(also referred to as the Fenchel–Legendre transform) andg*is theconcave conjugateofg. That is, f∗(p)=supx(⟨p,x⟩−f(x)),g∗(p)=infx(⟨p,x⟩−g(x)).{\displaystyle f^{*}(p)=\sup _{x}{\bigl (}\langle p,x\rangle -f(x){\bigr )},\qquad g^{*}(p)=\inf _{x}{\bigl (}\langle p,x\rangle -g(x){\bigr )}.}
LetXandYbeBanach spaces,f:X→R∪{+∞}{\displaystyle f:X\to \mathbb {R} \cup \{+\infty \}}andg:Y→R∪{+∞}{\displaystyle g:Y\to \mathbb {R} \cup \{+\infty \}}be convex functions andA:X→Y{\displaystyle A:X\to Y}be aboundedlinear map. Then the Fenchel problems p∗=infx∈X{f(x)+g(Ax)},d∗=supy∗∈Y∗{−f∗(A∗y∗)−g∗(−y∗)}{\displaystyle p^{*}=\inf _{x\in X}\{f(x)+g(Ax)\},\qquad d^{*}=\sup _{y^{*}\in Y^{*}}\{-f^{*}(A^{*}y^{*})-g^{*}(-y^{*})\}}
satisfyweak duality, i.e.p∗≥d∗{\displaystyle p^{*}\geq d^{*}}. Note thatf∗,g∗{\displaystyle f^{*},g^{*}}are the convex conjugates off,grespectively, andA∗{\displaystyle A^{*}}is theadjoint operator. Theperturbation functionfor thisdual problemis given byF(x,y)=f(x)+g(Ax−y){\displaystyle F(x,y)=f(x)+g(Ax-y)}.
Suppose thatf,g, andAsatisfy either
Thenstrong dualityholds, i.e.p∗=d∗{\displaystyle p^{*}=d^{*}}. Ifd∗∈R{\displaystyle d^{*}\in \mathbb {R} }thensupremumis attained.[1]
In the following figure, the minimization problem on the left side of the equation is illustrated. One seeks to varyxsuch that the vertical distance between the convex and concave curves atxis as small as possible. The position of the vertical line in the figure is the (approximate) optimum.
The next figure illustrates the maximization problem on the right hand side of the above equation. Tangents are drawn to each of the two curves such that both tangents have the same slopep. The problem is to adjustpin such a way that the two tangents are as far away from each other as possible (more precisely, such that the points where they intersect the y-axis are as far from each other as possible). Imagine the two tangents as metal bars with vertical springs between them that push them apart and against the two parabolas that are fixed in place.
Fenchel's theorem states that the two problems have the same solution. The points having the minimum vertical separation are also the tangency points for the maximally separated parallel tangents.
|
https://en.wikipedia.org/wiki/Fenchel%27s_duality_theorem
|
Inmathematics, theLegendre transformation(orLegendre transform), first introduced byAdrien-Marie Legendrein 1787 when studying the minimal surface problem,[1]is aninvolutivetransformationonreal-valued functions that areconvexon a real variable. Specifically, if a real-valued multivariable function is convex on one of its independent real variables, then the Legendre transform with respect to this variable is applicable to the function.
In physical problems, the Legendre transform is used to convert functions of one quantity (such as position, pressure, or temperature) into functions of theconjugate quantity(momentum, volume, and entropy, respectively). In this way, it is commonly used inclassical mechanicsto derive theHamiltonianformalism out of theLagrangianformalism (or vice versa) and inthermodynamicsto derive thethermodynamic potentials, as well as in the solution ofdifferential equationsof several variables.
For sufficiently smooth functions on the real line, the Legendre transformf∗{\displaystyle f^{*}}of a functionf{\displaystyle f}can be specified, up to an additive constant, by the condition that the functions' first derivatives are inverse functions of each other. This can be expressed inEuler's derivative notationasDf(⋅)=(Df∗)−1(⋅),{\displaystyle Df(\cdot )=\left(Df^{*}\right)^{-1}(\cdot )~,}whereD{\displaystyle D}is an operator of differentiation,⋅{\displaystyle \cdot }represents an argument or input to the associated function,(ϕ)−1(⋅){\displaystyle (\phi )^{-1}(\cdot )}is an inverse function such that(ϕ)−1(ϕ(x))=x{\displaystyle (\phi )^{-1}(\phi (x))=x}, or equivalently, asf′(f∗′(x∗))=x∗{\displaystyle f'(f^{*\prime }(x^{*}))=x^{*}}andf∗′(f′(x))=x{\displaystyle f^{*\prime }(f'(x))=x}inLagrange's notation.
The generalization of the Legendre transformation to affine spaces and non-convex functions is known as theconvex conjugate(also called the Legendre–Fenchel transformation), which can be used to construct a function'sconvex hull.
LetI⊂R{\displaystyle I\subset \mathbb {R} }be aninterval, andf:I→R{\displaystyle f:I\to \mathbb {R} }aconvex function; then theLegendre transformoff{\displaystyle f}is the functionf∗:I∗→R{\displaystyle f^{*}:I^{*}\to \mathbb {R} }defined byf∗(x∗)=supx∈I(x∗x−f(x)),I∗={x∗∈R:supx∈I(x∗x−f(x))<∞}{\displaystyle f^{*}(x^{*})=\sup _{x\in I}(x^{*}x-f(x)),\ \ \ \ I^{*}=\left\{x^{*}\in \mathbb {R} :\sup _{x\in I}(x^{*}x-f(x))<\infty \right\}}wheresup{\textstyle \sup }denotes thesupremumoverI{\displaystyle I}, e.g.,x{\textstyle x}inI{\textstyle I}is chosen such thatx∗x−f(x){\textstyle x^{*}x-f(x)}is maximized at eachx∗{\textstyle x^{*}}, orx∗{\textstyle x^{*}}is such thatx∗x−f(x){\displaystyle x^{*}x-f(x)}has a bounded value throughoutI{\textstyle I}(e.g., whenf(x){\displaystyle f(x)}is a linear function).
The functionf∗{\displaystyle f^{*}}is called theconvex conjugatefunction off{\displaystyle f}. For historical reasons (rooted in analytic mechanics), the conjugate variable is often denotedp{\displaystyle p}, instead ofx∗{\displaystyle x^{*}}. If the convex functionf{\displaystyle f}is defined on the whole line and is everywheredifferentiable, thenf∗(p)=supx∈I(px−f(x))=(px−f(x))|x=(f′)−1(p){\displaystyle f^{*}(p)=\sup _{x\in I}(px-f(x))=\left(px-f(x)\right)|_{x=(f')^{-1}(p)}}can be interpreted as the negative of they{\displaystyle y}-interceptof thetangent lineto thegraphoff{\displaystyle f}that has slopep{\displaystyle p}.
The generalization to convex functionsf:X→R{\displaystyle f:X\to \mathbb {R} }on aconvex setX⊂Rn{\displaystyle X\subset \mathbb {R} ^{n}}is straightforward:f∗:X∗→R{\displaystyle f^{*}:X^{*}\to \mathbb {R} }has domainX∗={x∗∈Rn:supx∈X(⟨x∗,x⟩−f(x))<∞}{\displaystyle X^{*}=\left\{x^{*}\in \mathbb {R} ^{n}:\sup _{x\in X}(\langle x^{*},x\rangle -f(x))<\infty \right\}}and is defined byf∗(x∗)=supx∈X(⟨x∗,x⟩−f(x)),x∗∈X∗,{\displaystyle f^{*}(x^{*})=\sup _{x\in X}(\langle x^{*},x\rangle -f(x)),\quad x^{*}\in X^{*}~,}where⟨x∗,x⟩{\displaystyle \langle x^{*},x\rangle }denotes thedot productofx∗{\displaystyle x^{*}}andx{\displaystyle x}.
The Legendre transformation is an application of thedualityrelationship between points and lines. The functional relationship specified byf{\displaystyle f}can be represented equally well as a set of(x,y){\displaystyle (x,y)}points, or as a set of tangent lines specified by their slope and intercept values.
For a differentiable convex functionf{\displaystyle f}on the real line with the first derivativef′{\displaystyle f'}and its inverse(f′)−1{\displaystyle (f')^{-1}}, the Legendre transform off{\displaystyle f},f∗{\displaystyle f^{*}}, can be specified, up to an additive constant, by the condition that the functions' first derivatives are inverse functions of each other, i.e.,f′=((f∗)′)−1{\displaystyle f'=((f^{*})')^{-1}}and(f∗)′=(f′)−1{\displaystyle (f^{*})'=(f')^{-1}}.
To see this, first note that iff{\displaystyle f}as a convex function on the real line is differentiable andx¯{\displaystyle {\overline {x}}}is acritical pointof the function ofx↦p⋅x−f(x){\displaystyle x\mapsto p\cdot x-f(x)}, then the supremum is achieved atx¯{\textstyle {\overline {x}}}(by convexity, see the first figure in this Wikipedia page). Therefore, the Legendre transform off{\displaystyle f}isf∗(p)=p⋅x¯−f(x¯){\displaystyle f^{*}(p)=p\cdot {\overline {x}}-f({\overline {x}})}.
Then, suppose that the first derivativef′{\displaystyle f'}is invertible and let the inverse beg=(f′)−1{\displaystyle g=(f')^{-1}}. Then for eachp{\textstyle p}, the pointg(p){\displaystyle g(p)}is the unique critical pointx¯{\textstyle {\overline {x}}}of the functionx↦px−f(x){\displaystyle x\mapsto px-f(x)}(i.e.,x¯=g(p){\displaystyle {\overline {x}}=g(p)}) becausef′(g(p))=p{\displaystyle f'(g(p))=p}and the function's first derivative with respect tox{\displaystyle x}atg(p){\displaystyle g(p)}isp−f′(g(p))=0{\displaystyle p-f'(g(p))=0}. Hence we havef∗(p)=p⋅g(p)−f(g(p)){\displaystyle f^{*}(p)=p\cdot g(p)-f(g(p))}for eachp{\textstyle p}. By differentiating with respect top{\textstyle p}, we find(f∗)′(p)=g(p)+p⋅g′(p)−f′(g(p))⋅g′(p).{\displaystyle (f^{*})'(p)=g(p)+p\cdot g'(p)-f'(g(p))\cdot g'(p).}Sincef′(g(p))=p{\displaystyle f'(g(p))=p}this simplifies to(f∗)′(p)=g(p)=(f′)−1(p){\displaystyle (f^{*})'(p)=g(p)=(f')^{-1}(p)}. In other words,(f∗)′{\displaystyle (f^{*})'}andf′{\displaystyle f'}are inverses to each other.
In general, ifh′=(f′)−1{\displaystyle h'=(f')^{-1}}as the inverse off′,{\displaystyle f',}thenh′=(f∗)′{\displaystyle h'=(f^{*})'}so integration givesf∗=h+c.{\displaystyle f^{*}=h+c.}with a constantc.{\displaystyle c.}
In practical terms, givenf(x),{\displaystyle f(x),}the parametric plot ofxf′(x)−f(x){\displaystyle xf'(x)-f(x)}versusf′(x){\displaystyle f'(x)}amounts to the graph off∗(p){\displaystyle f^{*}(p)}versusp.{\displaystyle p.}
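A quick numerical check of this parametric construction (a sketch with an assumed example function, not one used in the text): for f(x) = cosh(x), the conjugate is f*(p) = p·arcsinh(p) − √(1 + p²), and the parametric plot of x f′(x) − f(x) against p = f′(x) reproduces it.

```python
# Sketch: the parametric plot of x*f'(x) - f(x) against p = f'(x) traces f*(p).
# Assumed example: f(x) = cosh(x), with f*(p) = p*arcsinh(p) - sqrt(1 + p**2).
import numpy as np

x = np.linspace(-3.0, 3.0, 13)
p = np.sinh(x)                                      # f'(x)
fstar_parametric = x * np.sinh(x) - np.cosh(x)      # x*f'(x) - f(x)
fstar_analytic = p * np.arcsinh(p) - np.sqrt(1.0 + p**2)

print(np.allclose(fstar_parametric, fstar_analytic))   # True
```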
In some cases (e.g. thermodynamic potentials, below), a non-standard requirement is used, amounting to an alternative definition off*with aminus sign,f(x)−f∗(p)=xp.{\displaystyle f(x)-f^{*}(p)=xp.}
In analytical mechanics and thermodynamics, Legendre transformation is usually defined as follows: supposef{\displaystyle f}is a function ofx{\displaystyle x}; then we have df=dfdxdx.{\displaystyle \mathrm {d} f={\frac {\mathrm {d} f}{\mathrm {d} x}}\,\mathrm {d} x.}
Performing the Legendre transformation on this function means that we takep=dfdx{\displaystyle p={\frac {\mathrm {d} f}{\mathrm {d} x}}}as the independent variable, so that the above expression can be written as df=pdx,{\displaystyle \mathrm {d} f=p\,\mathrm {d} x,}
and according to Leibniz's ruled(uv)=udv+vdu,{\displaystyle \mathrm {d} (uv)=u\mathrm {d} v+v\mathrm {d} u,}we then have d(xp−f)=xdp+pdx−df=xdp,{\displaystyle \mathrm {d} (xp-f)=x\,\mathrm {d} p+p\,\mathrm {d} x-\mathrm {d} f=x\,\mathrm {d} p,}
and takingf∗=xp−f,{\displaystyle f^{*}=xp-f,}we havedf∗=xdp,{\displaystyle \mathrm {d} f^{*}=x\mathrm {d} p,}which means x=df∗dp.{\displaystyle x={\frac {\mathrm {d} f^{*}}{\mathrm {d} p}}.}
Whenf{\displaystyle f}is a function ofn{\displaystyle n}variablesx1,x2,⋯,xn{\displaystyle x_{1},x_{2},\cdots ,x_{n}}, then we can perform the Legendre transformation on each one or several variables: we have
wherepi=∂f∂xi.{\displaystyle p_{i}={\frac {\partial f}{\partial x_{i}}}.}Then if we want to perform the Legendre transformation on, e.g.x1{\displaystyle x_{1}}, then we takep1{\displaystyle p_{1}}together withx2,⋯,xn{\displaystyle x_{2},\cdots ,x_{n}}as independent variables, and with Leibniz's rule we have
So for the functionφ(p1,x2,⋯,xn)=f(x1,x2,⋯,xn)−x1p1,{\displaystyle \varphi (p_{1},x_{2},\cdots ,x_{n})=f(x_{1},x_{2},\cdots ,x_{n})-x_{1}p_{1},}we have
We can also do this transformation for variablesx2,⋯,xn{\displaystyle x_{2},\cdots ,x_{n}}. If we do it to all the variables, then we have
In analytical mechanics, people perform this transformation on variablesq˙1,q˙2,⋯,q˙n{\displaystyle {\dot {q}}_{1},{\dot {q}}_{2},\cdots ,{\dot {q}}_{n}}of the LagrangianL(q1,⋯,qn,q˙1,⋯,q˙n){\displaystyle L(q_{1},\cdots ,q_{n},{\dot {q}}_{1},\cdots ,{\dot {q}}_{n})}to get the Hamiltonian:
H(q1,⋯,qn,p1,⋯,pn)=∑i=1npiq˙i−L(q1,⋯,qn,q˙1⋯,q˙n).{\displaystyle H(q_{1},\cdots ,q_{n},p_{1},\cdots ,p_{n})=\sum _{i=1}^{n}p_{i}{\dot {q}}_{i}-L(q_{1},\cdots ,q_{n},{\dot {q}}_{1}\cdots ,{\dot {q}}_{n}).}
In thermodynamics, people perform this transformation on variables according to the type of thermodynamic system they want; for example, starting from the cardinal function of state, the internal energyU(S,V){\displaystyle U(S,V)}, we have dU=TdS−PdV,{\displaystyle \mathrm {d} U=T\,\mathrm {d} S-P\,\mathrm {d} V,}
so we can perform the Legendre transformation on either or both ofS,V{\displaystyle S,V}to yield d(U+PV)=TdS+VdP,d(U−TS)=−SdT−PdV,d(U+PV−TS)=−SdT+VdP,{\displaystyle \mathrm {d} (U+PV)=T\,\mathrm {d} S+V\,\mathrm {d} P,\qquad \mathrm {d} (U-TS)=-S\,\mathrm {d} T-P\,\mathrm {d} V,\qquad \mathrm {d} (U+PV-TS)=-S\,\mathrm {d} T+V\,\mathrm {d} P,}
and each of these three expressions has a physical meaning.
This definition of the Legendre transformation is the one originally introduced by Legendre in his work in 1787,[1]and is still applied by physicists nowadays. Indeed, this definition can be mathematically rigorous if we treat all the variables and functions defined above: for example,f,x1,⋯,xn,p1,⋯,pn,{\displaystyle f,x_{1},\cdots ,x_{n},p_{1},\cdots ,p_{n},}as differentiable functions defined on an open set ofRn{\displaystyle \mathbb {R} ^{n}}or on a differentiable manifold, anddf,dxi,dpi{\displaystyle \mathrm {d} f,\mathrm {d} x_{i},\mathrm {d} p_{i}}their differentials (which are treated as cotangent vector field in the context of differentiable manifold). This definition is equivalent to the modern mathematicians' definition as long asf{\displaystyle f}is differentiable and convex for the variablesx1,x2,⋯,xn.{\displaystyle x_{1},x_{2},\cdots ,x_{n}.}
As shownabove, for a convex functionf(x){\displaystyle f(x)}, withx=x¯{\displaystyle x={\bar {x}}}maximizing or makingpx−f(x){\displaystyle px-f(x)}bounded at eachp{\displaystyle p}to define the Legendre transformf∗(p)=px¯−f(x¯){\displaystyle f^{*}(p)=p{\bar {x}}-f({\bar {x}})}and withg≡(f′)−1{\displaystyle g\equiv (f')^{-1}}, the following identities hold.
Consider theexponential functionf(x)=ex,{\displaystyle f(x)=e^{x},}which has the domainI=R{\displaystyle I=\mathbb {R} }. From the definition, the Legendre transform isf∗(x∗)=supx∈R(x∗x−ex),x∗∈I∗{\displaystyle f^{*}(x^{*})=\sup _{x\in \mathbb {R} }(x^{*}x-e^{x}),\quad x^{*}\in I^{*}}whereI∗{\displaystyle I^{*}}remains to be determined. To evaluate thesupremum, compute the derivative ofx∗x−ex{\displaystyle x^{*}x-e^{x}}with respect tox{\displaystyle x}and set equal to zero:ddx(x∗x−ex)=x∗−ex=0.{\displaystyle {\frac {d}{dx}}(x^{*}x-e^{x})=x^{*}-e^{x}=0.}Thesecond derivative−ex{\displaystyle -e^{x}}is negative everywhere, so the maximal value is achieved atx=ln(x∗){\displaystyle x=\ln(x^{*})}. Thus, the Legendre transform isf∗(x∗)=x∗ln(x∗)−eln(x∗)=x∗(ln(x∗)−1){\displaystyle f^{*}(x^{*})=x^{*}\ln(x^{*})-e^{\ln(x^{*})}=x^{*}(\ln(x^{*})-1)}and has domainI∗=(0,∞).{\displaystyle I^{*}=(0,\infty ).}This illustrates that thedomainsof a function and its Legendre transform can be different.
To find the Legendre transformation of the Legendre transformation off{\displaystyle f},f∗∗(x)=supx∗∈R(xx∗−x∗(ln(x∗)−1)),x∈I,{\displaystyle f^{**}(x)=\sup _{x^{*}\in \mathbb {R} }(xx^{*}-x^{*}(\ln(x^{*})-1)),\quad x\in I,}where a variablex{\displaystyle x}is intentionally used as the argument of the functionf∗∗{\displaystyle f^{**}}to show theinvolutionproperty of the Legendre transform asf∗∗=f{\displaystyle f^{**}=f}. we compute0=ddx∗(xx∗−x∗(ln(x∗)−1))=x−ln(x∗){\displaystyle {\begin{aligned}0&={\frac {d}{dx^{*}}}{\big (}xx^{*}-x^{*}(\ln(x^{*})-1){\big )}=x-\ln(x^{*})\end{aligned}}}thus the maximum occurs atx∗=ex{\displaystyle x^{*}=e^{x}}because the second derivatived2dx∗2f∗∗(x)=−1x∗<0{\displaystyle {\frac {d^{2}}{{dx^{*}}^{2}}}f^{**}(x)=-{\frac {1}{x^{*}}}<0}over the domain off∗∗{\displaystyle f^{**}}asI∗=(0,∞).{\displaystyle I^{*}=(0,\infty ).}As a result,f∗∗{\displaystyle f^{**}}is found asf∗∗(x)=xex−ex(ln(ex)−1)=ex,{\displaystyle {\begin{aligned}f^{**}(x)&=xe^{x}-e^{x}(\ln(e^{x})-1)=e^{x},\end{aligned}}}thereby confirming thatf=f∗∗,{\displaystyle f=f^{**},}as expected.
Letf(x) =cx2defined onR, wherec> 0is a fixed constant.
Forx*fixed, the function ofx,x*x−f(x) =x*x−cx2has the first derivativex* − 2cxand second derivative−2c; there is one stationary point atx=x*/2c, which is always a maximum.
Thus,I* =Randf∗(x∗)=x∗24c.{\displaystyle f^{*}(x^{*})={\frac {{x^{*}}^{2}}{4c}}~.}
The first derivatives off, 2cx, and off*,x*/(2c), are inverse functions to each other. Clearly, furthermore,f∗∗(x)=14(1/4c)x2=cx2,{\displaystyle f^{**}(x)={\frac {1}{4(1/4c)}}x^{2}=cx^{2}~,}namelyf** =f.
Letf(x) =x2forx∈ (I= [2, 3]).
Forx*fixed,x*x−f(x)is continuous on the compact intervalI, hence it always attains a finite maximum there; it follows that the domain of the Legendre transform off{\displaystyle f}isI* =R.
The stationary point atx=x*/2(found by setting that the first derivative ofx*x−f(x)with respect tox{\displaystyle x}equal to zero) is in the domain[2, 3]if and only if4 ≤x* ≤ 6. Otherwise the maximum is taken either atx= 2orx= 3because the second derivative ofx*x−f(x)with respect tox{\displaystyle x}is negative as−2{\displaystyle -2}; for a part of the domainx∗<4{\displaystyle x^{*}<4}the maximum thatx*x−f(x)can take with respect tox∈[2,3]{\displaystyle x\in [2,3]}is obtained atx=2{\displaystyle x=2}while forx∗>6{\displaystyle x^{*}>6}it becomes the maximum atx=3{\displaystyle x=3}. Thus, it follows thatf∗(x∗)={2x∗−4,x∗<4x∗24,4≤x∗≤6,3x∗−9,x∗>6.{\displaystyle f^{*}(x^{*})={\begin{cases}2x^{*}-4,&x^{*}<4\\{\frac {{x^{*}}^{2}}{4}},&4\leq x^{*}\leq 6,\\3x^{*}-9,&x^{*}>6.\end{cases}}}
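The piecewise formula above can be checked by maximizing x*x − x² directly over the interval [2, 3]; the short sketch below does this on a fine grid (the sample values of x* are arbitrary test points).

```python
# Numerical check of the piecewise Legendre transform of f(x) = x**2 on I = [2, 3].
import numpy as np

xs = np.linspace(2.0, 3.0, 10001)

def fstar_numeric(x_star):
    return np.max(x_star * xs - xs**2)       # direct maximization over the interval

def fstar_piecewise(x_star):
    if x_star < 4:
        return 2 * x_star - 4
    if x_star <= 6:
        return x_star**2 / 4
    return 3 * x_star - 9

for x_star in (1.0, 4.0, 5.0, 6.0, 8.0):     # arbitrary test points
    print(x_star, fstar_numeric(x_star), fstar_piecewise(x_star))
    # the two columns agree (up to the grid resolution)
```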
The functionf(x) =cxis convex, for everyx(strict convexity is not required for the Legendre transformation to be well defined). Clearlyx*x−f(x) = (x* −c)xis neverbounded from aboveas a function ofx, unlessx* −c= 0. Hencef*is defined onI* = {c}andf*(c) = 0. (The definition of the Legendre transformrequires the existence of thesupremum, that requires upper bounds.)
One may check involutivity: of course,x*x−f*(x*)is always bounded as a function ofx*∈{c}, henceI** =R. Then, for allxone hassupx∗∈{c}(xx∗−f∗(x∗))=xc,{\displaystyle \sup _{x^{*}\in \{c\}}(xx^{*}-f^{*}(x^{*}))=xc,}and hencef**(x) =cx=f(x).
As an example of a convex continuous function that is not everywhere differentiable, considerf(x)=|x|{\displaystyle f(x)=|x|}. This givesf∗(x∗)=supx(xx∗−|x|)=max(supx≥0x(x∗−1),supx≤0x(x∗+1)),{\displaystyle f^{*}(x^{*})=\sup _{x}(xx^{*}-|x|)=\max \left(\sup _{x\geq 0}x(x^{*}-1),\,\sup _{x\leq 0}x(x^{*}+1)\right),}and thusf∗(x∗)=0{\displaystyle f^{*}(x^{*})=0}on its domainI∗=[−1,1]{\displaystyle I^{*}=[-1,1]}.
Letf(x)=⟨x,Ax⟩+c{\displaystyle f(x)=\langle x,Ax\rangle +c}be defined onX=Rn, whereAis a real, positive definite matrix.
Thenfis convex, and⟨p,x⟩−f(x)=⟨p,x⟩−⟨x,Ax⟩−c,{\displaystyle \langle p,x\rangle -f(x)=\langle p,x\rangle -\langle x,Ax\rangle -c,}has gradientp− 2AxandHessian−2A, which is negative definite; hence the stationary pointx=A−1p/2is a maximum.
We haveX* =Rn, andf∗(p)=14⟨p,A−1p⟩−c.{\displaystyle f^{*}(p)={\frac {1}{4}}\langle p,A^{-1}p\rangle -c.}
The Legendre transform is linked tointegration by parts,p dx=d(px) −x dp.
Letf(x,y)be a function of two independent variablesxandy, with the differentialdf=∂f∂xdx+∂f∂ydy=pdx+vdy.{\displaystyle df={\frac {\partial f}{\partial x}}\,dx+{\frac {\partial f}{\partial y}}\,dy=p\,dx+v\,dy.}
Assume that the functionfis convex inxfor ally, so that one may perform the Legendre transform onfinx, withpthe variable conjugate tox(for information, there is a relation∂f∂x|x¯=p{\displaystyle {\frac {\partial f}{\partial x}}|_{\bar {x}}=p}wherex¯{\displaystyle {\bar {x}}}is a point inxmaximizing or makingpx−f(x,y){\displaystyle px-f(x,y)}bounded for givenpandy). Since the new independent variable of the transform with respect tofisp, the differentialsdxanddyindfdevolve todpanddyin the differential of the transform, i.e., we build another function with its differential expressed in terms of the new basisdpanddy.
We thus consider the functiong(p,y) =f−pxso thatdg=df−pdx−xdp=−xdp+vdy{\displaystyle dg=df-p\,dx-x\,dp=-x\,dp+v\,dy}x=−∂g∂p{\displaystyle x=-{\frac {\partial g}{\partial p}}}v=∂g∂y.{\displaystyle v={\frac {\partial g}{\partial y}}.}
The function−g(p,y)is the Legendre transform off(x,y), where only the independent variablexhas been supplanted byp. This is widely used inthermodynamics, as illustrated below.
A Legendre transform is used inclassical mechanicsto derive theHamiltonian formulationfrom theLagrangian formulation, and conversely. A typical Lagrangian has the form
L(v,q)=12⟨v,Mv⟩−V(q),{\displaystyle L(v,q)={\tfrac {1}{2}}\langle v,Mv\rangle -V(q),}where(v,q){\displaystyle (v,q)}are coordinates onRn×Rn,Mis a positive definite real matrix, and⟨x,y⟩=∑jxjyj.{\displaystyle \langle x,y\rangle =\sum _{j}x_{j}y_{j}.}
For everyqfixed,L(v,q){\displaystyle L(v,q)}is a convex function ofv{\displaystyle v}, whileV(q){\displaystyle V(q)}plays the role of a constant.
Hence the Legendre transform ofL(v,q){\displaystyle L(v,q)}as a function ofv{\displaystyle v}is the Hamiltonian function,H(p,q)=12⟨p,M−1p⟩+V(q).{\displaystyle H(p,q)={\tfrac {1}{2}}\langle p,M^{-1}p\rangle +V(q).}
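A one-dimensional worked instance (the harmonic-oscillator Lagrangian below is an assumed example, not taken from the text) can be carried out symbolically with SymPy: eliminating v in favour of the conjugate momentum p = ∂L/∂v reproduces the familiar Hamiltonian p²/(2m) + kq²/2.

```python
# Sketch: Legendre transform of an assumed Lagrangian L = m*v**2/2 - k*q**2/2 in v.
import sympy as sp

m, k = sp.symbols('m k', positive=True)
q, v, p = sp.symbols('q v p', real=True)
L = sp.Rational(1, 2) * m * v**2 - sp.Rational(1, 2) * k * q**2

p_of_v = sp.diff(L, v)                        # conjugate momentum: p = dL/dv = m*v
v_of_p = sp.solve(sp.Eq(p, p_of_v), v)[0]     # invert: v = p/m
H = sp.expand(p * v_of_p - L.subs(v, v_of_p)) # H(p, q) = p*v - L

print(H)   # k*q**2/2 + p**2/(2*m)
```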
In a more general setting,(v,q){\displaystyle (v,q)}are local coordinates on thetangent bundleTM{\displaystyle T{\mathcal {M}}}of a manifoldM{\displaystyle {\mathcal {M}}}. For eachq,L(v,q){\displaystyle L(v,q)}is a convex function on the tangent spaceVq. The Legendre transform gives the HamiltonianH(p,q){\displaystyle H(p,q)}as a function of the coordinates(p,q)of thecotangent bundleT∗M{\displaystyle T^{*}{\mathcal {M}}}; the inner product used to define the Legendre transform is inherited from the pertinent canonicalsymplectic structure. In this abstract setting, the Legendre transformation corresponds to thetautological one-form.
The strategy behind the use of Legendre transforms in thermodynamics is to shift from a function that depends on a variable to a new (conjugate) function that depends on a new variable, the conjugate of the original one. The new variable is the partial derivative of the original function with respect to the original variable. The new function is the difference between the original function and the product of the old and new variables. Typically, this transformation is useful because it shifts the dependence of, e.g., the energy from anextensive variableto its conjugate intensive variable, which can often be controlled more easily in a physical experiment.
For example, theinternal energyUis an explicit function of theextensive variablesentropyS,volumeV, andchemical compositionNi(e.g.,i=1,2,3,…{\displaystyle i=1,2,3,\ldots })U=U(S,V,{Ni}),{\displaystyle U=U\left(S,V,\{N_{i}\}\right),}which has a total differentialdU=TdS−PdV+∑μidNi{\displaystyle dU=T\,dS-P\,dV+\sum \mu _{i}\,dN_{i}}
whereT=∂U∂S|V,Niforallivalues,P=−∂U∂V|S,Niforallivalues,μi=∂U∂Ni|S,V,Njforallj≠i{\displaystyle T=\left.{\frac {\partial U}{\partial S}}\right\vert _{V,N_{i\ for\ all\ i\ values}},P=\left.-{\frac {\partial U}{\partial V}}\right\vert _{S,N_{i\ for\ all\ i\ values}},\mu _{i}=\left.{\frac {\partial U}{\partial N_{i}}}\right\vert _{S,V,N_{j\ for\ all\ j\neq i}}}.
(Subscripts are not necessary by the definition of partial derivatives but left here for clarifying variables.) Stipulating some common reference state, by using the (non-standard) Legendre transform of the internal energyUwith respect to volumeV, theenthalpyHmay be obtained as the following.
To get the (standard) Legendre transformU∗{\textstyle U^{*}}of the internal energyUwith respect to volumeV, the functionu(p,S,V,{Ni})=pV−U{\textstyle u\left(p,S,V,\{{{N}_{i}}\}\right)=pV-U}is defined first, then it shall be maximized or bounded byV. To do this, the condition∂u∂V=p−∂U∂V=0→p=∂U∂V{\textstyle {\frac {\partial u}{\partial V}}=p-{\frac {\partial U}{\partial V}}=0\to p={\frac {\partial U}{\partial V}}}needs to be satisfied, soU∗=∂U∂VV−U{\textstyle U^{*}={\frac {\partial U}{\partial V}}V-U}is obtained. This approach is justified becauseUis a linear function with respect toV(so a convex function onV) by the definition ofextensive variables. The non-standard Legendre transform here is obtained by negating the standard version, so−U∗=H=U−∂U∂VV=U+PV{\textstyle -U^{*}=H=U-{\frac {\partial U}{\partial V}}V=U+PV}.
H is definitely astate functionas it is obtained by addingPV(PandVasstate variables) to a state functionU=U(S,V,{Ni}){\textstyle U=U\left(S,V,\{N_{i}\}\right)}, so its differential is anexact differential. Because ofdH=TdS+VdP+∑μidNi{\textstyle dH=T\,dS+V\,dP+\sum \mu _{i}\,dN_{i}}and the fact that it must be an exact differential,H=H(S,P,{Ni}){\displaystyle H=H(S,P,\{N_{i}\})}.
The enthalpy is suitable for description of processes in which the pressure is controlled from the surroundings.
It is likewise possible to shift the dependence of the energy from the extensive variable of entropy,S, to the (often more convenient) intensive variableT, resulting in theHelmholtzandGibbsfree energies. The Helmholtz free energyA, and Gibbs energyG, are obtained by performing Legendre transforms of the internal energy and enthalpy, respectively,A=U−TS,{\displaystyle A=U-TS~,}G=H−TS=U+PV−TS.{\displaystyle G=H-TS=U+PV-TS~.}
The Helmholtz free energy is often the most useful thermodynamic potential when temperature and volume are controlled from the surroundings, while the Gibbs energy is often the most useful when temperature and pressure are controlled from the surroundings.
As another example fromphysics, consider a parallel conductive platecapacitor, in which the plates can move relative to one another. Such a capacitor would allow transfer of the electric energy which is stored in the capacitor into external mechanical work, done by theforceacting on the plates. One may think of the electric charge as analogous to the "charge" of agasin acylinder, with the resulting mechanicalforceexerted on apiston.
Compute the force on the plates as a function ofx, the distance which separates them. To find the force, compute the potential energy, and then apply the definition of force as the gradient of the potential energy function.
Theelectrostatic potential energystored in a capacitor of thecapacitanceC(x)and a positiveelectric charge+Qor negative charge-Qon each conductive plate is (with using the definition of the capacitance asC=QV{\textstyle C={\frac {Q}{V}}}),
U(Q,x)=12QV(Q,x)=12Q2C(x),{\displaystyle U(Q,\mathbf {x} )={\frac {1}{2}}QV(Q,\mathbf {x} )={\frac {1}{2}}{\frac {Q^{2}}{C(\mathbf {x} )}},~}
where the dependence on the area of the plates, the dielectric constant of the insulation material between the plates, and the separationxare abstracted away as thecapacitanceC(x). (For a parallel plate capacitor, this is proportional to the area of the plates and inversely proportional to the separation.)
The forceFbetween the plates due to the electric field created by the charge separation is thenF(x)=−dUdx.{\displaystyle \mathbf {F} (\mathbf {x} )=-{\frac {dU}{d\mathbf {x} }}~.}
If the capacitor is not connected to any electric circuit, then theelectric chargeson the plates remain constant and the voltage varies when the plates move with respect to each other, and the force is the negativegradientof theelectrostaticpotential energy asF(x)=12dC(x)dxQ2C(x)2=12dC(x)dxV(x)2{\displaystyle \mathbf {F} (\mathbf {x} )={\frac {1}{2}}{\frac {dC(\mathbf {x} )}{d\mathbf {x} }}{\frac {Q^{2}}{{C(\mathbf {x} )}^{2}}}={\frac {1}{2}}{\frac {dC(\mathbf {x} )}{d\mathbf {x} }}V(\mathbf {x} )^{2}}
whereV(Q,x)=V(x){\textstyle V(Q,\mathbf {x} )=V(\mathbf {x} )}as the charge is fixed in this configuration.
However, instead, suppose that thevoltagebetween the platesVis maintained constant as the plate moves by connection to abattery, which is a reservoir for electric charges at a constant potential difference. Then the amount ofchargesQ{\textstyle Q}is a variableinstead of the voltage;Q{\textstyle Q}andV{\textstyle V}are the Legendre conjugate to each other. To find the force, first compute the non-standard Legendre transformU∗{\textstyle U^{*}}with respect toQ{\textstyle Q}(also with usingC=QV{\textstyle C={\frac {Q}{V}}}),
U∗=U−∂U∂Q|x⋅Q=U−12C(x)∂Q2∂Q|x⋅Q=U−QV=12QV−QV=−12QV=−12V2C(x).{\displaystyle U^{*}=U-\left.{\frac {\partial U}{\partial Q}}\right|_{\mathbf {x} }\cdot Q=U-{\frac {1}{2C(\mathbf {x} )}}\left.{\frac {\partial Q^{2}}{\partial Q}}\right|_{\mathbf {x} }\cdot Q=U-QV={\frac {1}{2}}QV-QV=-{\frac {1}{2}}QV=-{\frac {1}{2}}V^{2}C(\mathbf {x} ).}
This transformation is possible becauseU{\textstyle U}is now a linear function ofQ{\textstyle Q}so is convex on it. The force now becomes the negative gradient of this Legendre transform, resulting in the same force obtained from the original functionU{\textstyle U},F(x)=−dU∗dx=12dC(x)dxV2.{\displaystyle \mathbf {F} (\mathbf {x} )=-{\frac {dU^{*}}{d\mathbf {x} }}={\frac {1}{2}}{\frac {dC(\mathbf {x} )}{d\mathbf {x} }}V^{2}.}
The two conjugate energiesU{\textstyle U}andU∗{\textstyle U^{*}}happen to stand opposite to each other (their signs are opposite), only because of thelinearityof thecapacitance—except nowQis no longer a constant. They reflect the two different pathways of storing energy into the capacitor, resulting in, for instance, the same "pull" between a capacitor's plates.
Inlarge deviations theory, therate functionis defined as the Legendre transformation of the logarithm of themoment generating functionof a random variable. An important application of the rate function is in the calculation of tail probabilities of sums ofi.i.d. random variables, in particular inCramér's theorem.
IfXn{\displaystyle X_{n}}are i.i.d. random variables, letSn=X1+⋯+Xn{\displaystyle S_{n}=X_{1}+\cdots +X_{n}}be the associatedrandom walkandM(ξ){\displaystyle M(\xi )}the moment generating function ofX1{\displaystyle X_{1}}. Forξ∈R{\displaystyle \xi \in \mathbb {R} },E[eξSn]=M(ξ)n{\displaystyle E[e^{\xi S_{n}}]=M(\xi )^{n}}. Hence, byMarkov's inequality, one has forξ≥0{\displaystyle \xi \geq 0}anda∈R{\displaystyle a\in \mathbb {R} }P(Sn/n>a)≤e−nξaM(ξ)n=exp[−n(ξa−Λ(ξ))]{\displaystyle P(S_{n}/n>a)\leq e^{-n\xi a}M(\xi )^{n}=\exp[-n(\xi a-\Lambda (\xi ))]}whereΛ(ξ)=logM(ξ){\displaystyle \Lambda (\xi )=\log M(\xi )}. Since the left-hand side is independent ofξ{\displaystyle \xi }, we may take the infimum of the right-hand side, which leads one to consider the supremum ofξa−Λ(ξ){\displaystyle \xi a-\Lambda (\xi )}, i.e., the Legendre transform ofΛ{\displaystyle \Lambda }, evaluated atx=a{\displaystyle x=a}.
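A small Monte Carlo sketch of this Chernoff-type bound (the standard normal distribution, the threshold a = 0.5, and the sample sizes below are assumptions chosen for illustration): for standard normal summands, Λ(ξ) = ξ²/2, so its Legendre transform is I(a) = a²/2 and the bound is exp(−n a²/2).

```python
# Illustrative check that the empirical tail of S_n/n stays below exp(-n*I(a)),
# where I(a) = a**2/2 is the Legendre transform of Lambda(xi) = xi**2/2
# (the log-moment-generating function of a standard normal variable).
import numpy as np

rng = np.random.default_rng(0)
n, a, trials = 50, 0.5, 100_000              # assumed sample size, threshold, replications

sample_means = rng.standard_normal((trials, n)).mean(axis=1)
empirical_tail = (sample_means > a).mean()
chernoff_bound = np.exp(-n * a**2 / 2)

print(empirical_tail, chernoff_bound)        # the empirical tail lies below the bound
```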
Legendre transformation arises naturally inmicroeconomicsin the process of finding thesupplyS(P)of some product given a fixed pricePon the market knowing thecost functionC(Q), i.e. the cost for the producer to make/mine/etc.Qunits of the given product.
A simple theory explains the shape of the supply curve based solely on the cost function. Let us suppose the market price for a one unit of our product isP. For a company selling this good, the best strategy is to adjust the productionQso that its profit is maximized. We can maximize the profitprofit=revenue−costs=PQ−C(Q){\displaystyle {\text{profit}}={\text{revenue}}-{\text{costs}}=PQ-C(Q)}by differentiating with respect toQand solvingP−C′(Qopt)=0.{\displaystyle P-C'(Q_{\text{opt}})=0.}
Qoptrepresents the optimal quantityQof goods that the producer is willing to supply, which is indeed the supply itself:S(P)=Qopt(P)=(C′)−1(P).{\displaystyle S(P)=Q_{\text{opt}}(P)=(C')^{-1}(P).}
If we consider the maximal profit as a function of price,profitmax(P){\displaystyle {\text{profit}}_{\text{max}}(P)}, we see that it is the Legendre transform of the cost functionC(Q){\displaystyle C(Q)}.
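For a concrete sketch, assume the quadratic cost function C(Q) = cQ² (an illustrative choice, not taken from the text). Solving P = C′(Q) gives the supply S(P) = P/(2c), and the maximal profit P²/(4c) is exactly the Legendre transform of C, in agreement with the quadratic example worked earlier.

```python
# Sketch with an assumed cost function C(Q) = c*Q**2: the maximal profit as a
# function of the price P is the Legendre transform of the cost function.
import sympy as sp

P, Q, c = sp.symbols('P Q c', positive=True)
C = c * Q**2

Q_opt = sp.solve(sp.Eq(P, sp.diff(C, Q)), Q)[0]          # supply: solve P = C'(Q)
profit_max = sp.simplify(P * Q_opt - C.subs(Q, Q_opt))   # maximal profit = C*(P)

print(Q_opt, profit_max)   # P/(2*c)   P**2/(4*c)
```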
For astrictly convex function, the Legendre transformation can be interpreted as a mapping between thegraphof the function and the family oftangentsof the graph. (For a function of one variable, the tangents are well-defined at all but at mostcountably manypoints, since a convex function isdifferentiableat all but at most countably many points.)
The equation of a line withslopep{\displaystyle p}andy{\displaystyle y}-interceptb{\displaystyle b}is given byy=px+b{\displaystyle y=px+b}. For this line to be tangent to the graph of a functionf{\displaystyle f}at the point(x0,f(x0)){\displaystyle \left(x_{0},f(x_{0})\right)}requiresf(x0)=px0+b{\displaystyle f(x_{0})=px_{0}+b}andp=f′(x0).{\displaystyle p=f'(x_{0}).}
Being the derivative of a strictly convex function, the functionf′{\displaystyle f'}is strictly monotone and thusinjective. The second equation can be solved forx0=f′−1(p),{\textstyle x_{0}=f^{\prime -1}(p),}allowing elimination ofx0{\displaystyle x_{0}}from the first, and solving for they{\displaystyle y}-interceptb{\displaystyle b}of the tangent as a function of its slopep,{\displaystyle p,}b=f(x0)−px0=f(f′−1(p))−p⋅f′−1(p)=−f⋆(p){\textstyle b=f(x_{0})-px_{0}=f\left(f^{\prime -1}(p)\right)-p\cdot f^{\prime -1}(p)=-f^{\star }(p)}wheref⋆{\displaystyle f^{\star }}denotes the Legendre transform off.{\displaystyle f.}
Thefamilyof tangent lines of the graph off{\displaystyle f}parameterized by the slopep{\displaystyle p}is therefore given byy=px−f⋆(p),{\textstyle y=px-f^{\star }(p),}or, written implicitly, by the solutions of the equationF(x,y,p)=y+f⋆(p)−px=0.{\displaystyle F(x,y,p)=y+f^{\star }(p)-px=0~.}
The graph of the original function can be reconstructed from this family of lines as theenvelopeof this family by demanding∂F(x,y,p)∂p=f⋆′(p)−x=0.{\displaystyle {\frac {\partial F(x,y,p)}{\partial p}}=f^{\star \prime }(p)-x=0.}
Eliminatingp{\displaystyle p}from these two equations givesy=x⋅f⋆′−1(x)−f⋆(f⋆′−1(x)).{\displaystyle y=x\cdot f^{\star \prime -1}(x)-f^{\star }\left(f^{\star \prime -1}(x)\right).}
Identifyingy{\displaystyle y}withf(x){\displaystyle f(x)}and recognizing the right side of the preceding equation as the Legendre transform off⋆,{\displaystyle f^{\star },}yieldsf(x)=f⋆⋆(x).{\textstyle f(x)=f^{\star \star }(x)~.}
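The involution f = f⋆⋆ can be checked numerically for a particular convex function; the sketch below uses f(x) = e^x and arbitrarily chosen grids, computing both transforms by brute-force maximization.

import math

f = math.exp
xs = [i * 0.01 - 2.0 for i in range(401)]        # x grid on [-2, 2]
ps = [i * 0.01 + 0.01 for i in range(800)]       # p grid on (0, 8]

def f_star(p):                                   # Legendre transform of f
    return max(p * x - f(x) for x in xs)

def f_star_star(x):                              # Legendre transform of f*
    return max(p * x - f_star(p) for p in ps)

for x in (-1.0, 0.0, 1.0):
    print(x, f(x), f_star_star(x))               # last two columns agree (approximately)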
For a differentiable real-valued function on anopenconvex subsetUofRnthe Legendre conjugate of the pair(U,f)is defined to be the pair(V,g), whereVis the image ofUunder thegradientmappingDf, andgis the function onVgiven by the formulag(y)=⟨y,x⟩−f(x),x=(Df)−1(y){\displaystyle g(y)=\left\langle y,x\right\rangle -f(x),\qquad x=\left(Df\right)^{-1}(y)}where⟨u,v⟩=∑k=1nuk⋅vk{\displaystyle \left\langle u,v\right\rangle =\sum _{k=1}^{n}u_{k}\cdot v_{k}}
is thescalar productonRn. The multidimensional transform can be interpreted as an encoding of theconvex hullof the function'sepigraphin terms of itssupporting hyperplanes.[2]This can be seen as a consequence of the following two observations. On the one hand, the hyperplane tangent to the epigraph off{\displaystyle f}at some point(x,f(x))∈U×R{\displaystyle (\mathbf {x} ,f(\mathbf {x} ))\in U\times \mathbb {R} }has normal vector(∇f(x),−1)∈Rn+1{\displaystyle (\nabla f(\mathbf {x} ),-1)\in \mathbb {R} ^{n+1}}. On the other hand, any closed convex setC⊆Rm{\displaystyle C\subseteq \mathbb {R} ^{m}}can be characterized via the set of itssupporting hyperplanesby the equationsx⋅n=hC(n){\displaystyle \mathbf {x} \cdot \mathbf {n} =h_{C}(\mathbf {n} )}, wherehC(n){\displaystyle h_{C}(\mathbf {n} )}is thesupport functionofC{\displaystyle C}. But the definition of Legendre transform via the maximization matches precisely that of the support function, that is,f∗(x)=hepi(f)(x,−1){\displaystyle f^{*}(\mathbf {x} )=h_{\operatorname {epi} (f)}(\mathbf {x} ,-1)}. We thus conclude that the Legendre transform characterizes the epigraph in the sense that the tangent plane to the epigraph at any point(x,f(x)){\displaystyle (\mathbf {x} ,f(\mathbf {x} ))}is given explicitly by{z∈Rn+1:z⋅x=f∗(x)}.{\displaystyle \{\mathbf {z} \in \mathbb {R} ^{n+1}:\,\,\mathbf {z} \cdot \mathbf {x} =f^{*}(\mathbf {x} )\}.}
Alternatively, ifXis avector spaceandYis itsdual vector space, then for each pointxofXandyofY, there is a natural identification of thecotangent spaceofXatxwithY, and of the cotangent space ofYatywithX. Iffis a real differentiable function overX, then itsexterior derivative,df, is a section of thecotangent bundleT*Xand as such, we can construct a map fromXtoY. Similarly, ifgis a real differentiable function overY, thendgdefines a map fromYtoX. If both maps happen to be inverses of each other, we say we have a Legendre transform. The notion of thetautological one-formis commonly used in this setting.
When the function is not differentiable, the Legendre transform can still be extended, and is known as theLegendre-Fenchel transformation. In this more general setting, a few properties are lost: for example, the Legendre transform is no longer its own inverse (unless there are extra assumptions, likeconvexity).
LetM{\textstyle M}be asmooth manifold, letE{\displaystyle E}andπ:E→M{\textstyle \pi :E\to M}be avector bundleonM{\displaystyle M}and its associatedbundle projection, respectively. LetL:E→R{\textstyle L:E\to \mathbb {R} }be a smooth function. We think ofL{\textstyle L}as aLagrangianby analogy with the classical case whereM=R{\textstyle M=\mathbb {R} },E=TM=R×R{\textstyle E=TM=\mathbb {R} \times \mathbb {R} }andL(x,v)=12mv2−V(x){\textstyle L(x,v)={\frac {1}{2}}mv^{2}-V(x)}for some positive numberm∈R{\textstyle m\in \mathbb {R} }and functionV:M→R{\textstyle V:M\to \mathbb {R} }.
As usual, thedualofE{\textstyle E}is denoted byE∗{\textstyle E^{*}}. The fiber ofπ{\textstyle \pi }overx∈M{\textstyle x\in M}is denotedEx{\textstyle E_{x}}, and the restriction ofL{\textstyle L}toEx{\textstyle E_{x}}is denoted byL|Ex:Ex→R{\textstyle L|_{E_{x}}:E_{x}\to \mathbb {R} }. TheLegendre transformationofL{\textstyle L}is the smooth morphismFL:E→E∗{\displaystyle \mathbf {F} L:E\to E^{*}}defined byFL(v)=d(L|Ex)v∈Ex∗{\textstyle \mathbf {F} L(v)=d(L|_{E_{x}})_{v}\in E_{x}^{*}}, wherex=π(v){\textstyle x=\pi (v)}. Here we use the fact that sinceEx{\textstyle E_{x}}is a vector space,Tv(Ex){\textstyle T_{v}(E_{x})}can be identified withEx{\textstyle E_{x}}.
In other words,FL(v)∈Ex∗{\textstyle \mathbf {F} L(v)\in E_{x}^{*}}is the covector that sendsw∈Ex{\textstyle w\in E_{x}}to the directional derivativeddt|t=0L(v+tw)∈R{\textstyle \left.{\frac {d}{dt}}\right|_{t=0}L(v+tw)\in \mathbb {R} }.
To describe the Legendre transformation locally, letU⊆M{\textstyle U\subseteq M}be a coordinate chart over whichE{\textstyle E}is trivial. Picking a trivialization ofE{\textstyle E}overU{\textstyle U}, we obtain chartsEU≅U×Rr{\textstyle E_{U}\cong U\times \mathbb {R} ^{r}}andEU∗≅U×Rr{\textstyle E_{U}^{*}\cong U\times \mathbb {R} ^{r}}. In terms of these charts, we haveFL(x;v1,…,vr)=(x;p1,…,pr){\textstyle \mathbf {F} L(x;v_{1},\dotsc ,v_{r})=(x;p_{1},\dotsc ,p_{r})}, wherepi=∂L∂vi(x;v1,…,vr){\displaystyle p_{i}={\frac {\partial L}{\partial v_{i}}}(x;v_{1},\dotsc ,v_{r})}for alli=1,…,r{\textstyle i=1,\dots ,r}. If, as in the classical case, the restriction ofL:E→R{\textstyle L:E\to \mathbb {R} }to each fiberEx{\textstyle E_{x}}is strictly convex and bounded below by a positive definite quadratic form minus a constant, then the Legendre transformFL:E→E∗{\textstyle \mathbf {F} L:E\to E^{*}}is a diffeomorphism.[3]Suppose thatFL{\textstyle \mathbf {F} L}is a diffeomorphism and letH:E∗→R{\textstyle H:E^{*}\to \mathbb {R} }be the "Hamiltonian" function defined byH(p)=p⋅v−L(v),{\displaystyle H(p)=p\cdot v-L(v),}wherev=(FL)−1(p){\textstyle v=(\mathbf {F} L)^{-1}(p)}. Using the natural isomorphismE≅E∗∗{\textstyle E\cong E^{**}}, we may view the Legendre transformation ofH{\textstyle H}as a mapFH:E∗→E{\textstyle \mathbf {F} H:E^{*}\to E}. Then we have[3](FL)−1=FH.{\displaystyle (\mathbf {F} L)^{-1}=\mathbf {F} H.}
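In the classical case this is the familiar passage from the Lagrangian to the Hamiltonian, p = ∂L/∂v and H(p) = p·v − L(v). The following sketch checks this numerically; the values of m and k and the potential V are illustrative assumptions.

# Sketch for the classical case L(x, v) = m*v**2/2 - V(x), with illustrative m, k.
m, k = 2.0, 3.0
V = lambda x: 0.5 * k * x**2
L = lambda x, v: 0.5 * m * v**2 - V(x)

def FL(x, v):                 # fiber derivative, by a central difference in v
    h = 1e-6
    return (L(x, v + h) - L(x, v - h)) / (2 * h)

def H(x, p):                  # Legendre transform on the fiber over x
    v = p / m                 # invert p = m*v
    return p * v - L(x, v)

x, v = 0.7, 1.3
p = FL(x, v)
print(p, m * v)                               # both ~ m*v
print(H(x, p), p**2 / (2 * m) + V(x))         # Hamiltonians agree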
The Legendre transformation has the following scaling properties: Fora> 0,
f(x)=a⋅g(x)⇒f⋆(p)=a⋅g⋆(pa){\displaystyle f(x)=a\cdot g(x)\Rightarrow f^{\star }(p)=a\cdot g^{\star }\left({\frac {p}{a}}\right)}f(x)=g(a⋅x)⇒f⋆(p)=g⋆(pa).{\displaystyle f(x)=g(a\cdot x)\Rightarrow f^{\star }(p)=g^{\star }\left({\frac {p}{a}}\right).}
It follows that if a function ishomogeneous of degreerthen its image under the Legendre transformation is a homogeneous function of degrees, where1/r+ 1/s= 1. (Sincef(x) =xr/r, withr> 1, impliesf*(p) =ps/s.) Thus, the only monomial whose degree is invariant under Legendre transform is the quadratic.
f(x)=g(x)+b⇒f⋆(p)=g⋆(p)−b{\displaystyle f(x)=g(x)+b\Rightarrow f^{\star }(p)=g^{\star }(p)-b}f(x)=g(x+y)⇒f⋆(p)=g⋆(p)−p⋅y{\displaystyle f(x)=g(x+y)\Rightarrow f^{\star }(p)=g^{\star }(p)-p\cdot y}
f(x)=g−1(x)⇒f⋆(p)=−p⋅g⋆(1p){\displaystyle f(x)=g^{-1}(x)\Rightarrow f^{\star }(p)=-p\cdot g^{\star }\left({\frac {1}{p}}\right)}
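These properties are easy to test numerically. The sketch below, with the arbitrary choices g(x) = x², a = 3 and b = 5, verifies the scaling rule f(x) = g(ax) ⇒ f⋆(p) = g⋆(p/a) and the translation rule f(x) = g(x) + b ⇒ f⋆(p) = g⋆(p) − b by grid maximization.

def conjugate(f, p, xs=None):
    xs = xs or [i * 0.001 - 10.0 for i in range(20001)]   # grid on [-10, 10]
    return max(p * x - f(x) for x in xs)

g = lambda x: x**2
a, b = 3.0, 5.0
f_scaled = lambda x: g(a * x)        # expect f*(p) = g*(p/a)
f_shift  = lambda x: g(x) + b        # expect f*(p) = g*(p) - b

for p in (1.0, 2.0, 4.0):
    print(conjugate(f_scaled, p), conjugate(g, p / a))
    print(conjugate(f_shift, p),  conjugate(g, p) - b)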
LetA:Rn→Rmbe alinear transformation. For any convex functionfonRn, one has(Af)⋆=f⋆A⋆{\displaystyle (Af)^{\star }=f^{\star }A^{\star }}whereA*is theadjoint operatorofAdefined by⟨Ax,y⋆⟩=⟨x,A⋆y⋆⟩,{\displaystyle \left\langle Ax,y^{\star }\right\rangle =\left\langle x,A^{\star }y^{\star }\right\rangle ,}andAfis thepush-forwardoffalongA(Af)(y)=inf{f(x):x∈X,Ax=y}.{\displaystyle (Af)(y)=\inf\{f(x):x\in X,Ax=y\}.}
A closed convex functionfis symmetric with respect to a given setGoforthogonal linear transformations,f(Ax)=f(x),∀x,∀A∈G{\displaystyle f(Ax)=f(x),\;\forall x,\;\forall A\in G}if and only iff*is symmetric with respect toG.
Theinfimal convolutionof two functionsfandgis defined as
(f⋆infg)(x)=inf{f(x−y)+g(y)|y∈Rn}.{\displaystyle \left(f\star _{\inf }g\right)(x)=\inf \left\{f(x-y)+g(y)\,|\,y\in \mathbf {R} ^{n}\right\}.}
Letf1, ...,fmbe proper convex functions onRn. Then
(f1⋆inf⋯⋆inffm)⋆=f1⋆+⋯+fm⋆.{\displaystyle \left(f_{1}\star _{\inf }\cdots \star _{\inf }f_{m}\right)^{\star }=f_{1}^{\star }+\cdots +f_{m}^{\star }.}
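The identity can be illustrated numerically; the sketch below uses the arbitrary pair f(x) = x², g(x) = 2x² and a coarse grid, comparing the conjugate of the infimal convolution with the sum of the conjugates.

xs = [i * 0.02 - 10.0 for i in range(1001)]      # grid on [-10, 10]

f = lambda x: x**2
g = lambda x: 2 * x**2

def inf_conv(x):                                 # infimal convolution of f and g at x
    return min(f(x - y) + g(y) for y in xs)

def conjugate(h, p):
    return max(p * x - h(x) for x in xs)

for p in (0.5, 1.0, 2.0):
    lhs = conjugate(inf_conv, p)                 # conjugate of the infimal convolution
    rhs = conjugate(f, p) + conjugate(g, p)      # f*(p) + g*(p)
    print(p, lhs, rhs)                           # both ~ 3*p**2/8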
For any functionfand its convex conjugatef*,Fenchel's inequality(also known as theFenchel–Young inequality) holds for everyx∈Xandp∈X*, i.e., for arbitrary pairsx,p:⟨p,x⟩≤f(x)+f⋆(p).{\displaystyle \left\langle p,x\right\rangle \leq f(x)+f^{\star }(p).}
|
https://en.wikipedia.org/wiki/Legendre_transformation
|
Inmathematics,Young's inequality for productsis amathematical inequalityabout the product of two numbers.[1]The inequality is named afterWilliam Henry Youngand should not be confused withYoung's convolution inequality.
Young's inequality for products can be used to proveHölder's inequality. It is also widely used to estimate the norm of nonlinear terms inPDE theory, since it allows one to estimate a product of two terms by a sum of the same terms raised to a power and scaled.
The standard form of the inequality is the following, which can be used to proveHölder's inequality.
Theorem—Ifa≥0{\displaystyle a\geq 0}andb≥0{\displaystyle b\geq 0}arenonnegativereal numbersand ifp>1{\displaystyle p>1}andq>1{\displaystyle q>1}are real numbers such that1p+1q=1,{\displaystyle {\frac {1}{p}}+{\frac {1}{q}}=1,}thenab≤app+bqq.{\displaystyle ab~\leq ~{\frac {a^{p}}{p}}+{\frac {b^{q}}{q}}.}
Equality holds if and only ifap=bq.{\displaystyle a^{p}=b^{q}.}
Since1p+1q=1,{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}=1,}p−1=1q−1.{\displaystyle p-1={\tfrac {1}{q-1}}.}A graphy=xp−1{\displaystyle y=x^{p-1}}on thexy{\displaystyle xy}-plane is thus also a graphx=yq−1.{\displaystyle x=y^{q-1}.}Consider the rectangle bounded by the linesx=0,x=a,y=0,y=b.{\displaystyle x=0,x=a,y=0,y=b.}Since the curve is increasing inx{\displaystyle x}(and, equivalently, iny{\displaystyle y}), a sketch of the areas involved shows that∫0axp−1dx{\displaystyle \int _{0}^{a}x^{p-1}\mathrm {d} x}is at least the area of the part of the rectangle lying below the curve (with equality whenb≥ap−1{\displaystyle b\geq a^{p-1}}) and∫0byq−1dy{\displaystyle \int _{0}^{b}y^{q-1}\mathrm {d} y}is at least the area of the part of the rectangle lying above the curve (with equality whenb≤ap−1{\displaystyle b\leq a^{p-1}}). Thus,∫0axp−1dx+∫0byq−1dy≥ab,{\displaystyle \int _{0}^{a}x^{p-1}\mathrm {d} x+\int _{0}^{b}y^{q-1}\mathrm {d} y\geq ab,}with equality whenb=ap−1{\displaystyle b=a^{p-1}}(or equivalently,ap=bq{\displaystyle a^{p}=b^{q}}). Young's inequality follows from evaluating the integrals. (Seebelowfor a generalization.)
A second proof is viaJensen's inequality.
The claim is certainly true ifa=0{\displaystyle a=0}orb=0{\displaystyle b=0}so henceforth assume thata>0{\displaystyle a>0}andb>0.{\displaystyle b>0.}Putt=1/p{\displaystyle t=1/p}and(1−t)=1/q.{\displaystyle (1-t)=1/q.}Because thelogarithmfunction isconcave,ln(tap+(1−t)bq)≥tln(ap)+(1−t)ln(bq)=ln(a)+ln(b)=ln(ab){\displaystyle \ln \left(ta^{p}+(1-t)b^{q}\right)~\geq ~t\ln \left(a^{p}\right)+(1-t)\ln \left(b^{q}\right)=\ln(a)+\ln(b)=\ln(ab)}with the equality holding if and only ifap=bq.{\displaystyle a^{p}=b^{q}.}Young's inequality follows by exponentiating.
Yet another proof is to first prove it withb=1{\displaystyle b=1}and then apply the resulting inequality toabq{\displaystyle {\tfrac {a}{b^{q}}}}. The proof below also illustrates why the Hölder conjugate exponent is the only possible parameter that makes Young's inequality hold for all non-negative values. The details follow:
Let0<α<1{\displaystyle 0<\alpha <1}andα+β=1{\displaystyle \alpha +\beta =1}. The inequalityx≤αxp+βfor allx≥0{\displaystyle x~\leq ~\alpha x^{p}+\beta \qquad {\text{for all}}\quad x~\geq ~0}holds if and only ifα=1p{\displaystyle \alpha ={\tfrac {1}{p}}}(and henceβ=1q{\displaystyle \beta ={\tfrac {1}{q}}}). This can be shown by convexity arguments or by simply minimizing the single-variable functionαxp+β−x{\displaystyle \alpha x^{p}+\beta -x}.
To prove full Young's inequality, clearly we assume thata>0{\displaystyle a>0}andb>0{\displaystyle b>0}. Now, we apply the inequality above tox=abs{\displaystyle x={\tfrac {a}{b^{s}}}}to obtain:abs≤1papbsp+1q.{\displaystyle {\tfrac {a}{b^{s}}}~\leq ~{\tfrac {1}{p}}{\tfrac {a^{p}}{b^{sp}}}+{\tfrac {1}{q}}.}It is easy to see that choosings=q−1{\displaystyle s=q-1}and multiplying both sides bybq{\displaystyle b^{q}}yields Young's inequality.
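A quick numerical spot check of the inequality and of its equality case a^p = b^q (the test values below are arbitrary):

import itertools

def young_gap(a, b, p):
    q = p / (p - 1)                       # conjugate exponent, 1/p + 1/q = 1
    return a**p / p + b**q / q - a * b    # should be >= 0

for a, b, p in itertools.product((0.3, 1.0, 2.5), (0.2, 1.7), (1.5, 2.0, 3.0)):
    assert young_gap(a, b, p) >= -1e-12

# equality when a**p == b**q, e.g. a = t**(1/p), b = t**(1/q)
t, p = 2.0, 3.0
q = p / (p - 1)
print(young_gap(t**(1/p), t**(1/q), p))   # ~ 0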
Young's inequality may equivalently be written asaαbβ≤αa+βb,0≤α,β≤1,α+β=1.{\displaystyle a^{\alpha }b^{\beta }\leq \alpha a+\beta b,\qquad \,0\leq \alpha ,\beta \leq 1,\quad \ \alpha +\beta =1.}
This is just the concavity of thelogarithmfunction.
Equality holds if and only ifa=b{\displaystyle a=b}or{α,β}={0,1}.{\displaystyle \{\alpha ,\beta \}=\{0,1\}.}This also follows from the weightedAM-GM inequality.
Theorem[4]—Supposea>0{\displaystyle a>0}andb>0.{\displaystyle b>0.}If1<p<∞{\displaystyle 1<p<\infty }andq{\displaystyle q}are such that1p+1q=1{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}=1}thenab=min0<t<∞(tpapp+t−qbqq).{\displaystyle ab~=~\min _{0<t<\infty }\left({\frac {t^{p}a^{p}}{p}}+{\frac {t^{-q}b^{q}}{q}}\right).}
Usingt:=1{\displaystyle t:=1}and replacinga{\displaystyle a}witha1/p{\displaystyle a^{1/p}}andb{\displaystyle b}withb1/q{\displaystyle b^{1/q}}results in the inequality:a1/pb1/q≤ap+bq,{\displaystyle a^{1/p}\,b^{1/q}~\leq ~{\frac {a}{p}}+{\frac {b}{q}},}which is useful for provingHölder's inequality.
Define a real-valued functionf{\displaystyle f}on the positive real numbers byf(t)=tpapp+t−qbqq{\displaystyle f(t)~=~{\frac {t^{p}a^{p}}{p}}+{\frac {t^{-q}b^{q}}{q}}}for everyt>0{\displaystyle t>0}and then calculate its minimum.
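A grid search over t illustrates the theorem for one (arbitrary) choice of a, b and p:

a, b, p = 1.7, 0.6, 3.0
q = p / (p - 1)
ts = [i * 0.0001 for i in range(1, 200001)]       # t in (0, 20]
m = min(t**p * a**p / p + t**(-q) * b**q / q for t in ts)
print(m, a * b)                                   # both ~ 1.02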
Theorem—If0≤pi≤1{\displaystyle 0\leq p_{i}\leq 1}with∑ipi=1{\displaystyle \sum _{i}p_{i}=1}then∏iaipi≤∑ipiai.{\displaystyle \prod _{i}{a_{i}}^{p_{i}}~\leq ~\sum _{i}p_{i}a_{i}.}Equality holds if and only if all theai{\displaystyle a_{i}}s with non-zeropi{\displaystyle p_{i}}s are equal.
An elementary case of Young's inequality is the inequality withexponent2,{\displaystyle 2,}ab≤a22+b22,{\displaystyle ab\leq {\frac {a^{2}}{2}}+{\frac {b^{2}}{2}},}which also gives rise to the so-called Young's inequality withε{\displaystyle \varepsilon }(valid for everyε>0{\displaystyle \varepsilon >0}), sometimes called the Peter–Paul inequality.[5]This name refers to the fact that tighter control of the second term is achieved at the cost of losing some control of the first term – one must "rob Peter to pay Paul"ab≤a22ε+εb22.{\displaystyle ab~\leq ~{\frac {a^{2}}{2\varepsilon }}+{\frac {\varepsilon b^{2}}{2}}.}
Proof: Young's inequality with exponent2{\displaystyle 2}is the special casep=q=2.{\displaystyle p=q=2.}However, it has a more elementary proof.
Start by observing that the square of every real number is zero or positive. Therefore, for every pair of real numbersa{\displaystyle a}andb{\displaystyle b}we can write:0≤(a−b)2{\displaystyle 0\leq (a-b)^{2}}Work out the square of the right hand side:0≤a2−2ab+b2{\displaystyle 0\leq a^{2}-2ab+b^{2}}Add2ab{\displaystyle 2ab}to both sides:2ab≤a2+b2{\displaystyle 2ab\leq a^{2}+b^{2}}Divide both sides by 2 and we have Young's inequality with exponent2:{\displaystyle 2:}ab≤a22+b22{\displaystyle ab\leq {\frac {a^{2}}{2}}+{\frac {b^{2}}{2}}}
Young's inequality withε{\displaystyle \varepsilon }follows by substitutinga′{\displaystyle a'}andb′{\displaystyle b'}as below into Young's inequality with exponent2:{\displaystyle 2:}a′=a/ε,b′=εb.{\displaystyle a'=a/{\sqrt {\varepsilon }},\;b'={\sqrt {\varepsilon }}b.}
T. Ando proved a generalization of Young's inequality for complex matrices ordered
byLoewner ordering.[6]It states that for any pairA,B{\displaystyle A,B}of complex matrices of ordern{\displaystyle n}there exists aunitary matrixU{\displaystyle U}such thatU∗|AB∗|U⪯1p|A|p+1q|B|q,{\displaystyle U^{*}|AB^{*}|U\preceq {\tfrac {1}{p}}|A|^{p}+{\tfrac {1}{q}}|B|^{q},}where∗{\displaystyle {}^{*}}denotes theconjugate transposeof the matrix and|A|=A∗A.{\displaystyle |A|={\sqrt {A^{*}A}}.}
For the standard version[7][8]of the inequality,
letf{\displaystyle f}denote a real-valued, continuous and strictly increasing function on[0,c]{\displaystyle [0,c]}withc>0{\displaystyle c>0}andf(0)=0.{\displaystyle f(0)=0.}Letf−1{\displaystyle f^{-1}}denote theinverse functionoff.{\displaystyle f.}Then, for alla∈[0,c]{\displaystyle a\in [0,c]}andb∈[0,f(c)],{\displaystyle b\in [0,f(c)],}ab≤∫0af(x)dx+∫0bf−1(x)dx{\displaystyle ab~\leq ~\int _{0}^{a}f(x)\,dx+\int _{0}^{b}f^{-1}(x)\,dx}with equality if and only ifb=f(a).{\displaystyle b=f(a).}
Withf(x)=xp−1{\displaystyle f(x)=x^{p-1}}andf−1(y)=yq−1,{\displaystyle f^{-1}(y)=y^{q-1},}this reduces to standard version for conjugate Hölder exponents.
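The inequality can also be tested with a non-polynomial f; the sketch below uses the illustrative choice f(x) = e^x − 1, for which both integrals have closed forms, and includes one pair with b = f(a) to exhibit the equality case.

import math

def lhs_rhs(a, b):                              # for f(x) = exp(x) - 1, f^{-1}(y) = log(1 + y)
    F  = math.exp(a) - 1 - a                    # integral of f from 0 to a
    Fi = (1 + b) * math.log(1 + b) - b          # integral of f^{-1} from 0 to b
    return a * b, F + Fi

for a, b in ((0.5, 0.2), (1.0, 3.0), (2.0, math.exp(2.0) - 1)):
    prod, bound = lhs_rhs(a, b)
    print(prod <= bound + 1e-12, prod, bound)   # last pair hits equality, b = f(a)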
For details and generalizations we refer to the paper of Mitroi & Niculescu.[9]
By denoting theconvex conjugateof a real functionf{\displaystyle f}byg,{\displaystyle g,}we obtainab≤f(a)+g(b).{\displaystyle ab~\leq ~f(a)+g(b).}This follows immediately from the definition of the convex conjugate. For a convex functionf{\displaystyle f}this also follows from theLegendre transformation.
More generally, iff{\displaystyle f}is defined on a real vector spaceX{\displaystyle X}and itsconvex conjugateis denoted byf⋆{\displaystyle f^{\star }}(and is defined on thedual spaceX⋆{\displaystyle X^{\star }}), then⟨u,v⟩≤f⋆(u)+f(v).{\displaystyle \langle u,v\rangle \leq f^{\star }(u)+f(v).}where⟨⋅,⋅⟩:X⋆×X→R{\displaystyle \langle \cdot ,\cdot \rangle :X^{\star }\times X\to \mathbb {R} }is thedual pairing.
The convex conjugate off(a)=ap/p{\displaystyle f(a)=a^{p}/p}isg(b)=bq/q{\displaystyle g(b)=b^{q}/q}withq{\displaystyle q}such that1p+1q=1,{\displaystyle {\tfrac {1}{p}}+{\tfrac {1}{q}}=1,}and thus Young's inequality for conjugate Hölder exponents mentioned above is a special case.
The Legendre transform off(a)=ea−1{\displaystyle f(a)=e^{a}-1}isg(b)=1−b+blnb{\displaystyle g(b)=1-b+b\ln b}, henceab≤ea−b+blnb{\displaystyle ab\leq e^{a}-b+b\ln b}for all non-negativea{\displaystyle a}andb.{\displaystyle b.}This estimate is useful inlarge deviations theoryunder exponential moment conditions, becauseblnb{\displaystyle b\ln b}appears in the definition ofrelative entropy, which is therate functioninSanov's theorem.
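A small numerical check of this conjugate pair (the grid and the values of b are arbitrary): the supremum of ab − (e^a − 1) over a is compared with the closed form 1 − b + b ln b.

import math

a_grid = [i * 0.001 - 5.0 for i in range(15001)]      # a in [-5, 10]
for b in (0.5, 1.0, 3.0):
    g_numeric = max(a * b - (math.exp(a) - 1) for a in a_grid)
    g_closed  = 1 - b + b * math.log(b)
    print(b, g_numeric, g_closed)                     # the two columns agree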
|
https://en.wikipedia.org/wiki/Young%27s_inequality_for_products
|
Inmathematics, areal-valued functionis calledconvexif theline segmentbetween any two distinct points on thegraph of the functionlies above or on the graph between the two points. Equivalently, a function is convex if itsepigraph(the set of points on or above the graph of the function) is aconvex set.
In simple terms, a convex function graph is shaped like a cup∪{\displaystyle \cup }(or a straight line like a linear function), while aconcave function's graph is shaped like a cap∩{\displaystyle \cap }.
A twice-differentiablefunction of a single variable is convexif and only ifitssecond derivativeis nonnegative on its entiredomain.[1]Well-known examples of convex functions of a single variable include alinear functionf(x)=cx{\displaystyle f(x)=cx}(wherec{\displaystyle c}is areal number), aquadratic functioncx2{\displaystyle cx^{2}}(c{\displaystyle c}as a nonnegative real number) and anexponential functioncex{\displaystyle ce^{x}}(c{\displaystyle c}as a nonnegative real number).
Convex functions play an important role in many areas of mathematics. They are especially important in the study ofoptimizationproblems where they are distinguished by a number of convenient properties. For instance, a strictly convex function on anopen sethas no more than oneminimum. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and as a result, they are the most well-understood functionals in thecalculus of variations. Inprobability theory, a convex function applied to theexpected valueof arandom variableis always bounded above by the expected value of the convex function of the random variable. This result, known asJensen's inequality, can be used to deduceinequalitiessuch as thearithmetic–geometric mean inequalityandHölder's inequality.
LetX{\displaystyle X}be aconvex subsetof a realvector spaceand letf:X→R{\displaystyle f:X\to \mathbb {R} }be a function.
Thenf{\displaystyle f}is calledconvexif and only if any of the following equivalent conditions hold:
1. For all0≤t≤1{\displaystyle 0\leq t\leq 1}and allx1,x2∈X{\displaystyle x_{1},x_{2}\in X}:f(tx1+(1−t)x2)≤tf(x1)+(1−t)f(x2){\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)\leq tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)}
2. For all0<t<1{\displaystyle 0<t<1}and allx1,x2∈X{\displaystyle x_{1},x_{2}\in X}such thatx1≠x2{\displaystyle x_{1}\neq x_{2}}:f(tx1+(1−t)x2)≤tf(x1)+(1−t)f(x2){\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)\leq tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)}
The second statement characterizing convex functions that are valued in the real lineR{\displaystyle \mathbb {R} }is also the statement used to defineconvex functionsthat are valued in theextended real number line[−∞,∞]=R∪{±∞},{\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \},}where such a functionf{\displaystyle f}is allowed to take±∞{\displaystyle \pm \infty }as a value. The first statement is not used because it permitst{\displaystyle t}to take0{\displaystyle 0}or1{\displaystyle 1}as a value, in which case, iff(x1)=±∞{\displaystyle f\left(x_{1}\right)=\pm \infty }orf(x2)=±∞,{\displaystyle f\left(x_{2}\right)=\pm \infty ,}respectively, thentf(x1)+(1−t)f(x2){\displaystyle tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)}would be undefined (because the multiplications0⋅∞{\displaystyle 0\cdot \infty }and0⋅(−∞){\displaystyle 0\cdot (-\infty )}are undefined). The sum−∞+∞{\displaystyle -\infty +\infty }is also undefined so a convex extended real-valued function is typically only allowed to take exactly one of−∞{\displaystyle -\infty }and+∞{\displaystyle +\infty }as a value.
The second statement can also be modified to get the definition ofstrict convexity, where the latter is obtained by replacing≤{\displaystyle \,\leq \,}with the strict inequality<.{\displaystyle \,<.}Explicitly, the mapf{\displaystyle f}is calledstrictly convexif and only if for all real0<t<1{\displaystyle 0<t<1}and allx1,x2∈X{\displaystyle x_{1},x_{2}\in X}such thatx1≠x2{\displaystyle x_{1}\neq x_{2}}:f(tx1+(1−t)x2)<tf(x1)+(1−t)f(x2){\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)<tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)}
A strictly convex functionf{\displaystyle f}is a function for which the straight line segment between any pair of points on the graph off{\displaystyle f}lies strictly above the graph, except at the endpoints of the segment. An example of a function which is convex but not strictly convex isf(x,y)=x2+y{\displaystyle f(x,y)=x^{2}+y}. This function is not strictly convex because it is affine iny{\displaystyle y}: for any two points sharing anx{\displaystyle x}-coordinate, the graph contains the straight line segment between them, so the convexity inequality holds with equality there; for any two points with differentx{\displaystyle x}-coordinates the inequality is strict.
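The two situations can be seen numerically (the sample points and the value of t below are arbitrary): the convexity gap vanishes for points sharing an x-coordinate and is strictly positive otherwise.

f = lambda x, y: x**2 + y
t = 0.4

p1, p2 = (1.0, 0.0), (1.0, 5.0)        # same x-coordinate: equality
q1, q2 = (0.0, 0.0), (2.0, 1.0)        # different x-coordinates: strict inequality

def gap(a, b):                         # t*f(a) + (1-t)*f(b) - f(t*a + (1-t)*b)
    mx, my = t * a[0] + (1 - t) * b[0], t * a[1] + (1 - t) * b[1]
    return t * f(*a) + (1 - t) * f(*b) - f(mx, my)

print(gap(p1, p2))   # 0.0   -> not strictly convex
print(gap(q1, q2))   # > 0   -> the inequality is strict here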
The functionf{\displaystyle f}is said to beconcave(resp.strictly concave) if−f{\displaystyle -f}(f{\displaystyle f}multiplied by −1) is convex (resp. strictly convex).
The termconvexis often referred to asconvex downorconcave upward, and the termconcaveis often referred to asconcave downorconvex upward.[3][4][5]If the term "convex" is used without an "up" or "down" keyword, then it refers strictly to a cup shaped graph∪{\displaystyle \cup }. As an example,Jensen's inequalityrefers to an inequality involving a convex (convex-down) function.[6]
Many properties of convex functions have the same simple formulation for functions of many variables as for functions of one variable. See below the properties for the case of many variables, as some of them are not listed for functions of one variable.
Sincef{\displaystyle f}is convex, by using one of the convex function definitions above and lettingx2=0,{\displaystyle x_{2}=0,}it follows that for all real0≤t≤1,{\displaystyle 0\leq t\leq 1,}f(tx1)=f(tx1+(1−t)⋅0)≤tf(x1)+(1−t)f(0)≤tf(x1).{\displaystyle {\begin{aligned}f(tx_{1})&=f(tx_{1}+(1-t)\cdot 0)\\&\leq tf(x_{1})+(1-t)f(0)\\&\leq tf(x_{1}).\\\end{aligned}}}Fromf(tx1)≤tf(x1){\displaystyle f(tx_{1})\leq tf(x_{1})}, it follows thatf(a)+f(b)=f((a+b)aa+b)+f((a+b)ba+b)≤aa+bf(a+b)+ba+bf(a+b)=f(a+b).{\displaystyle {\begin{aligned}f(a)+f(b)&=f\left((a+b){\frac {a}{a+b}}\right)+f\left((a+b){\frac {b}{a+b}}\right)\\&\leq {\frac {a}{a+b}}f(a+b)+{\frac {b}{a+b}}f(a+b)\\&=f(a+b).\\\end{aligned}}}Namely,f(a)+f(b)≤f(a+b){\displaystyle f(a)+f(b)\leq f(a+b)}.
The concept of strong convexity extends and parametrizes the notion of strict convexity. Intuitively, a strongly-convex function is a function that grows at least as fast as a quadratic function.[11]A strongly convex function is also strictly convex, but not vice versa. If a one-dimensional functionf{\displaystyle f}is twice continuously differentiable and the domain is the real line, then we can characterize it as follows:
f{\displaystyle f}is convex if and only iff″(x)≥0{\displaystyle f''(x)\geq 0}for allx{\displaystyle x};
f{\displaystyle f}is strictly convex iff″(x)>0{\displaystyle f''(x)>0}for allx{\displaystyle x}(this is sufficient but not necessary);
f{\displaystyle f}is strongly convex if and only iff″(x)≥m>0{\displaystyle f''(x)\geq m>0}for allx{\displaystyle x}.
For example, letf{\displaystyle f}be strictly convex, and suppose there is a sequence of points(xn){\displaystyle (x_{n})}such thatf″(xn)=1n{\displaystyle f''(x_{n})={\tfrac {1}{n}}}. Even thoughf″(xn)>0{\displaystyle f''(x_{n})>0}, the function is not strongly convex becausef″(x){\displaystyle f''(x)}will become arbitrarily small.
More generally, a differentiable functionf{\displaystyle f}is called strongly convex with parameterm>0{\displaystyle m>0}if the following inequality holds for all pointsx,y{\displaystyle x,y}in its domain:[12](∇f(x)−∇f(y))T(x−y)≥m‖x−y‖22{\displaystyle (\nabla f(x)-\nabla f(y))^{T}(x-y)\geq m\|x-y\|_{2}^{2}}or, more generally,⟨∇f(x)−∇f(y),x−y⟩≥m‖x−y‖2{\displaystyle \langle \nabla f(x)-\nabla f(y),x-y\rangle \geq m\|x-y\|^{2}}where⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }is anyinner product, and‖⋅‖{\displaystyle \|\cdot \|}is the correspondingnorm. Some authors, such as[13]refer to functions satisfying this inequality asellipticfunctions.
An equivalent condition is the following:[14]f(y)≥f(x)+∇f(x)T(y−x)+m2‖y−x‖22{\displaystyle f(y)\geq f(x)+\nabla f(x)^{T}(y-x)+{\frac {m}{2}}\|y-x\|_{2}^{2}}
It is not necessary for a function to be differentiable in order to be strongly convex. A third definition[14]for a strongly convex function, with parameterm,{\displaystyle m,}is that, for allx,y{\displaystyle x,y}in the domain andt∈[0,1],{\displaystyle t\in [0,1],}f(tx+(1−t)y)≤tf(x)+(1−t)f(y)−12mt(1−t)‖x−y‖22{\displaystyle f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)-{\frac {1}{2}}mt(1-t)\|x-y\|_{2}^{2}}
Notice that this definition approaches the definition for strict convexity asm→0,{\displaystyle m\to 0,}and is identical to the definition of a convex function whenm=0.{\displaystyle m=0.}Despite this, functions exist that are strictly convex but are not strongly convex for anym>0{\displaystyle m>0}(see example below).
If the functionf{\displaystyle f}is twice continuously differentiable, then it is strongly convex with parameterm{\displaystyle m}if and only if∇2f(x)⪰mI{\displaystyle \nabla ^{2}f(x)\succeq mI}for allx{\displaystyle x}in the domain, whereI{\displaystyle I}is the identity and∇2f{\displaystyle \nabla ^{2}f}is theHessian matrix, and the inequality⪰{\displaystyle \succeq }means that∇2f(x)−mI{\displaystyle \nabla ^{2}f(x)-mI}ispositive semi-definite. This is equivalent to requiring that the minimumeigenvalueof∇2f(x){\displaystyle \nabla ^{2}f(x)}be at leastm{\displaystyle m}for allx.{\displaystyle x.}If the domain is just the real line, then∇2f(x){\displaystyle \nabla ^{2}f(x)}is just the second derivativef″(x),{\displaystyle f''(x),}so the condition becomesf″(x)≥m{\displaystyle f''(x)\geq m}. Ifm=0{\displaystyle m=0}then this means the Hessian is positive semidefinite (or if the domain is the real line, it means thatf″(x)≥0{\displaystyle f''(x)\geq 0}), which implies the function is convex, and perhaps strictly convex, but not strongly convex.
Assuming still that the function is twice continuously differentiable, one can show that the lower bound of∇2f(x){\displaystyle \nabla ^{2}f(x)}implies that it is strongly convex. UsingTaylor's Theoremthere existsz∈{tx+(1−t)y:t∈[0,1]}{\displaystyle z\in \{tx+(1-t)y:t\in [0,1]\}}such thatf(y)=f(x)+∇f(x)T(y−x)+12(y−x)T∇2f(z)(y−x){\displaystyle f(y)=f(x)+\nabla f(x)^{T}(y-x)+{\frac {1}{2}}(y-x)^{T}\nabla ^{2}f(z)(y-x)}Then(y−x)T∇2f(z)(y−x)≥m(y−x)T(y−x){\displaystyle (y-x)^{T}\nabla ^{2}f(z)(y-x)\geq m(y-x)^{T}(y-x)}by the assumption about the eigenvalues, and hence we recover the second strong convexity equation above.
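For a quadratic f(x) = ½xᵀQx the Hessian is Q everywhere, so the strong convexity parameter is the smallest eigenvalue of Q. The sketch below, with an arbitrarily chosen positive definite Q and using NumPy, reads off m and spot-checks the gradient condition stated above on random points.

import numpy as np

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])                   # illustrative positive definite matrix
m = np.linalg.eigvalsh(Q).min()              # smallest eigenvalue of the Hessian
print("strong convexity parameter m ~", m)

# check (grad f(x) - grad f(y))^T (x - y) >= m * ||x - y||^2 on random samples
rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = (Q @ x - Q @ y) @ (x - y)          # grad f(x) = Q x for this quadratic
    assert lhs >= m * np.linalg.norm(x - y)**2 - 1e-9
print("gradient strong convexity condition verified on random samples")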
A functionf{\displaystyle f}is strongly convex with parametermif and only if the functionx↦f(x)−m2‖x‖2{\displaystyle x\mapsto f(x)-{\frac {m}{2}}\|x\|^{2}}is convex.
A twice continuously differentiable functionf{\displaystyle f}on a compact domainX{\displaystyle X}that satisfiesf″(x)>0{\displaystyle f''(x)>0}for allx∈X{\displaystyle x\in X}is strongly convex. The proof of this statement follows from theextreme value theorem, which states that a continuous function on a compact set has a maximum and minimum.
Strongly convex functions are in general easier to work with than convex or strictly convex functions, since they are a smaller class. Like strictly convex functions, strongly convex functions have unique minima on compact sets.
Iffis a strongly-convex function with parameterm, then:[15]: Prop.6.1.4
A uniformly convex function,[16][17]with modulusϕ{\displaystyle \phi }, is a functionf{\displaystyle f}that, for allx,y{\displaystyle x,y}in the domain andt∈[0,1],{\displaystyle t\in [0,1],}satisfiesf(tx+(1−t)y)≤tf(x)+(1−t)f(y)−t(1−t)ϕ(‖x−y‖){\displaystyle f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)-t(1-t)\phi (\|x-y\|)}whereϕ{\displaystyle \phi }is a function that is non-negative and vanishes only at 0. This is a generalization of the concept of strongly convex function; by takingϕ(α)=m2α2{\displaystyle \phi (\alpha )={\tfrac {m}{2}}\alpha ^{2}}we recover the definition of strong convexity.
It is worth noting that some authors require the modulusϕ{\displaystyle \phi }to be an increasing function,[17]but this condition is not required by all authors.[16]
|
https://en.wikipedia.org/wiki/Convex_surface
|
Inmathematics,Farkas' lemmais a solvability theorem for a finitesystemoflinear inequalities. It was originally proven by the Hungarian mathematicianGyula Farkas.[1]Farkas'lemmais the key result underpinning thelinear programmingduality and has played a central role in the development ofmathematical optimization(alternatively,mathematical programming). It is used amongst other things in the proof of theKarush–Kuhn–Tucker theoreminnonlinear programming.[2]Remarkably, in the area of the foundations of quantum theory, the lemma also underlies the complete set ofBell inequalitiesin the form of necessary and sufficient conditions for the existence of alocal hidden-variable theory, given data from any specific set of measurements.[3]
Generalizations of Farkas' lemma concern the solvability of systems of convex inequalities,[4]i.e., infinite systems of linear inequalities. Farkas' lemma belongs to a class of statements called "theorems of the alternative": statements asserting that exactly one of two systems has a solution.[5]
There are a number of slightly different (but equivalent) formulations of the lemma in the literature. The one given here is due to Gale, Kuhn and Tucker (1951).[6]
Farkas' lemma—LetA∈Rm×n{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n}}andb∈Rm.{\displaystyle \mathbf {b} \in \mathbb {R} ^{m}.}Then exactly one of the following two assertions is true:
1. There exists anx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}such thatAx=b{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }andx≥0{\displaystyle \mathbf {x} \geq 0}.
2. There exists ay∈Rm{\displaystyle \mathbf {y} \in \mathbb {R} ^{m}}such thatA⊤y≥0{\displaystyle \mathbf {A} ^{\top }\mathbf {y} \geq 0}andb⊤y<0{\displaystyle \mathbf {b} ^{\top }\mathbf {y} <0}.
Here, the notationx≥0{\displaystyle \mathbf {x} \geq 0}means that all components of the vectorx{\displaystyle \mathbf {x} }are nonnegative.
Letm,n= 2,A=[6430],{\displaystyle \mathbf {A} ={\begin{bmatrix}6&4\\3&0\end{bmatrix}},}andb=[b1b2].{\displaystyle \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\end{bmatrix}}.}The lemma says that exactly one of the following two statements must be true (depending onb1andb2):
1. There existx1≥ 0 andx2≥ 0 such that 6x1+ 4x2=b1and 3x1=b2, or
2. There existy1andy2such that 6y1+ 3y2≥ 0, 4y1≥ 0, andb1y1+b2y2< 0.
Here is a proof of the lemma in this special case:
Consider theclosedconvex coneC(A){\displaystyle C(\mathbf {A} )}spanned by the columns ofA; that is,C(A)={Ax∣x≥0}.{\displaystyle C(\mathbf {A} )=\{\mathbf {A} \mathbf {x} \mid \mathbf {x} \geq 0\}.}
Observe thatC(A){\displaystyle C(\mathbf {A} )}is the set of the vectorsbfor which the first assertion in the statement of Farkas' lemma holds. On the other hand, the vectoryin the second assertion is orthogonal to ahyperplanethat separatesbandC(A).{\displaystyle C(\mathbf {A} ).}The lemma follows from the observation thatbbelongs toC(A){\displaystyle C(\mathbf {A} )}if and only ifthere is no hyperplane that separates it fromC(A).{\displaystyle C(\mathbf {A} ).}
More precisely, leta1,…,an∈Rm{\displaystyle \mathbf {a} _{1},\dots ,\mathbf {a} _{n}\in \mathbb {R} ^{m}}denote the columns ofA. In terms of these vectors, Farkas' lemma states that exactly one of the following two statements is true:
The sumsx1a1+⋯+xnan{\displaystyle x_{1}\mathbf {a} _{1}+\dots +x_{n}\mathbf {a} _{n}}with nonnegative coefficientsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}form the cone spanned by the columns ofA. Therefore, the first statement tells thatbbelongs toC(A).{\displaystyle C(\mathbf {A} ).}
The second statement tells that there exists a vectorysuch that the angle ofywith the vectorsaiis at most 90°, while the angle ofywith the vectorbis more than 90°. The hyperplane normal to this vector has the vectorsaion one side and the vectorbon the other side. Hence, this hyperplane separates the cone spanned bya1,…,an{\displaystyle \mathbf {a} _{1},\dots ,\mathbf {a} _{n}}from the vectorb.
For example, letn,m= 2,a1= (1, 0)T, anda2= (1, 1)T. The convex cone spanned bya1anda2can be seen as a wedge-shaped slice of the first quadrant in thexyplane. Now, supposeb= (0, 1). Certainly,bis not in the convex conea1x1+a2x2. Hence, there must be a separating hyperplane. Lety= (1, −1)T. We can see thata1·y= 1,a2·y= 0, andb·y= −1. Hence, the hyperplane with normalyindeed separates the convex conea1x1+a2x2fromb.
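The same example can be checked mechanically: a linear programming solver reports that Ax = b, x ≥ 0 is infeasible, and the certificate y = (1, −1) satisfies the second alternative. The sketch below uses SciPy's linprog purely as a feasibility test; any LP solver would do.

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],      # columns are a1 = (1, 0) and a2 = (1, 1)
              [0.0, 1.0]])
b = np.array([0.0, 1.0])

# First alternative: does A x = b have a solution with x >= 0?
res = linprog(c=np.zeros(2), A_eq=A, b_eq=b, bounds=[(0, None)] * 2)
print("A x = b, x >= 0 feasible:", res.success)         # False for this b

# Second alternative: the certificate y = (1, -1) from the text
y = np.array([1.0, -1.0])
print("A^T y =", A.T @ y, " b . y =", b @ y)             # A^T y >= 0 and b.y < 0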
A particularly suggestive and easy-to-remember version is the following: if a set of linear inequalities has no solution, then a contradiction can be produced from it by linear combination with nonnegative coefficients. In formulas: ifAx≤b{\displaystyle \mathbf {Ax} \leq \mathbf {b} }is unsolvable theny⊤A=0,{\displaystyle \mathbf {y} ^{\top }\mathbf {A} =0,}y⊤b=−1,{\displaystyle \mathbf {y} ^{\top }\mathbf {b} =-1,}y≥0{\displaystyle \mathbf {y} \geq 0}has a solution.[7]Note thaty⊤A{\displaystyle \mathbf {y} ^{\top }\mathbf {A} }is a combination of the left-hand sides,y⊤b{\displaystyle \mathbf {y} ^{\top }\mathbf {b} }a combination of the right-hand side of the inequalities. Since the positive combination produces a zero vector on the left and a −1 on the right, the contradiction is apparent.
Thus, Farkas' lemma can be viewed as a theorem oflogical completeness:Ax≤b{\displaystyle \mathbf {Ax} \leq \mathbf {b} }is a set of "axioms", the linear combinations are the "derivation rules", and the lemma says that, if the set of axioms is inconsistent, then it can be refuted using the derivation rules.[8]: 92–94
Farkas' lemma implies that thedecision problem"Given asystem of linear equations, does it have a non-negative solution?" is in the intersection ofNPandco-NP. This is because, according to the lemma, both a "yes" answer and a "no" answer have a proof that can be verified in polynomial time. The problems in the intersectionNP∩coNP{\displaystyle NP\cap coNP}are also calledwell-characterized problems. It is a long-standing open question whetherNP∩coNP{\displaystyle NP\cap coNP}is equal toP. In particular, the question of whether a system of linear equations has a non-negative solution was not known to be in P, until it was proved using theellipsoid method.[9]: 25
The Farkas Lemma has several variants with different sign constraints (the first one is the original version):[8]: 92
The latter variant is mentioned for completeness; it is not actually a "Farkas lemma" since it contains only equalities. Its proof is anexercise in linear algebra.
There are also Farkas-like lemmas forintegerprograms.[9]: 12--14For systems of equations, the lemma is simple:
For systems of inequalities, the lemma is much more complicated. It is based on the following tworules of inference:
The lemma says that:
The variants are summarized in the table below.
Generalized Farkas' lemma—LetA∈Rm×n,{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n},}b∈Rm,{\displaystyle \mathbf {b} \in \mathbb {R} ^{m},}letS{\displaystyle \mathbf {S} }be a closed convex cone inRn,{\displaystyle \mathbb {R} ^{n},}and let thedual coneofS{\displaystyle \mathbf {S} }beS∗={z∈Rn∣z⊤x≥0,∀x∈S}.{\displaystyle \mathbf {S} ^{*}=\{\mathbf {z} \in \mathbb {R} ^{n}\mid \mathbf {z} ^{\top }\mathbf {x} \geq 0,\forall \mathbf {x} \in \mathbf {S} \}.}If the convex coneC(A)={Ax∣x∈S}{\displaystyle C(\mathbf {A} )=\{\mathbf {A} \mathbf {x} \mid \mathbf {x} \in \mathbf {S} \}}is closed, then exactly one of the following two statements is true:
1. There exists anx∈S{\displaystyle \mathbf {x} \in \mathbf {S} }such thatAx=b{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }.
2. There exists ay∈Rm{\displaystyle \mathbf {y} \in \mathbb {R} ^{m}}such thatA⊤y∈S∗{\displaystyle \mathbf {A} ^{\top }\mathbf {y} \in \mathbf {S} ^{*}}andb⊤y<0{\displaystyle \mathbf {b} ^{\top }\mathbf {y} <0}.
Generalized Farkas' lemma can be interpreted geometrically as follows: either a vector is in a given closedconvex cone, or there exists ahyperplaneseparating the vector from the cone; there are no other possibilities. The closedness condition is necessary, seeSeparation theorem IinHyperplane separation theorem. For the original Farkas' lemma,S{\displaystyle \mathbf {S} }is the nonnegative orthantR+n,{\displaystyle \mathbb {R} _{+}^{n},}hence the closedness condition holds automatically. Indeed, for a polyhedral convex coneS{\displaystyle \mathbf {S} }, i.e., one for which there exists aB∈Rn×k{\displaystyle \mathbf {B} \in \mathbb {R} ^{n\times k}}such thatS={Bx∣x∈R+k},{\displaystyle \mathbf {S} =\{\mathbf {B} \mathbf {x} \mid \mathbf {x} \in \mathbb {R} _{+}^{k}\},}the closedness condition holds automatically. Inconvex optimization, various kinds of constraint qualification, e.g.Slater's condition, are responsible for closedness of the underlying convex coneC(A).{\displaystyle C(\mathbf {A} ).}
By settingS=Rn{\displaystyle \mathbf {S} =\mathbb {R} ^{n}}andS∗={0}{\displaystyle \mathbf {S} ^{*}=\{0\}}in generalized Farkas' lemma, we obtain the following corollary about the solvability for a finite system of linear equalities:
Corollary—LetA∈Rm×n{\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n}}andb∈Rm.{\displaystyle \mathbf {b} \in \mathbb {R} ^{m}.}Then exactly one of the following two statements is true:
1. There exists anx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}such thatAx=b{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }.
2. There exists ay∈Rm{\displaystyle \mathbf {y} \in \mathbb {R} ^{m}}such thatA⊤y=0{\displaystyle \mathbf {A} ^{\top }\mathbf {y} =0}andb⊤y<0{\displaystyle \mathbf {b} ^{\top }\mathbf {y} <0}.
Farkas' lemma can be varied to many further theorems of alternative by simple modifications,[5]such asGordan's theorem: EitherAx<0{\displaystyle \mathbf {Ax} <0}has a solutionx, orA⊤y=0{\displaystyle \mathbf {A} ^{\top }\mathbf {y} =0}has a nonzero solutionywithy≥ 0.
Common applications of Farkas' lemma include proving thestrong duality theorem associated with linear programmingand theKarush–Kuhn–Tucker conditions. An extension of Farkas' lemma can be used to analyze the strong duality conditions for and construct the dual of a semidefinite program. It is sufficient to prove the existence of the Karush–Kuhn–Tucker conditions using theFredholm alternativebut for the condition to be necessary, one must apply von Neumann'sminimax theoremto show the equations derived by Cauchy are not violated.
This is used forDill'sReluplex method for verifying deep neural networks.
|
https://en.wikipedia.org/wiki/Farkas%27_lemma
|
In mathematics, and particularly infunctional analysis,Fichera's existence principleis an existence and uniqueness theorem for solution offunctional equations, proved byGaetano Ficherain 1954.[1]More precisely, given a generalvector spaceVand twolinear mapsfrom itontotwoBanach spaces, the principle states necessary and sufficient conditions for alinear transformationbetween the twodualBanach spaces to be invertible for every vector inV.[2]
|
https://en.wikipedia.org/wiki/Fichera%27s_existence_principle
|
TheM. Riesz extension theoremis atheoreminmathematics, proved byMarcel Riesz[1]during his study of theproblem of moments.[2]
LetE{\displaystyle E}be arealvector space,F⊂E{\displaystyle F\subset E}be avector subspace, andK⊂E{\displaystyle K\subset E}be aconvex cone.
Alinear functionalϕ:F→R{\displaystyle \phi :F\to \mathbb {R} }is calledK{\displaystyle K}-positive, if it takes only non-negative values on the coneK{\displaystyle K}:ϕ(x)≥0forx∈F∩K.{\displaystyle \phi (x)\geq 0\quad {\text{for}}\quad x\in F\cap K.}
A linear functionalψ:E→R{\displaystyle \psi :E\to \mathbb {R} }is called aK{\displaystyle K}-positiveextensionofϕ{\displaystyle \phi }, if it is identical toϕ{\displaystyle \phi }in the domain ofϕ{\displaystyle \phi }, and also returns a value of at least 0 for all points in the coneK{\displaystyle K}:ψ|F=ϕandψ(x)≥0forx∈K.{\displaystyle \psi |_{F}=\phi \quad {\text{and}}\quad \psi (x)\geq 0\quad {\text{for}}\quad x\in K.}
In general, aK{\displaystyle K}-positive linear functional onF{\displaystyle F}cannot be extended to aK{\displaystyle K}-positive linear functional onE{\displaystyle E}. Already in two dimensions one obtains a counterexample. LetE=R2,K={(x,y):y>0}∪{(x,0):x>0},{\displaystyle E=\mathbb {R} ^{2},\ K=\{(x,y):y>0\}\cup \{(x,0):x>0\},}andF{\displaystyle F}be thex{\displaystyle x}-axis. The positive functionalϕ(x,0)=x{\displaystyle \phi (x,0)=x}can not be extended to a positive functional onE{\displaystyle E}.
However, the extension exists under the additional assumption thatE⊂K+F,{\displaystyle E\subset K+F,}namely for everyy∈E,{\displaystyle y\in E,}there exists anx∈F{\displaystyle x\in F}such thaty−x∈K.{\displaystyle y-x\in K.}
The proof is similar to the proof of theHahn–Banach theorem(see also below).
Bytransfinite inductionorZorn's lemmait is sufficient to consider the case dimE/F=1{\displaystyle E/F=1}.
Choose anyy∈E∖F{\displaystyle y\in E\setminus F}. Seta=sup{ϕ(x)∣x∈F,y−x∈K},b=inf{ϕ(x)∣x∈F,x−y∈K}.{\displaystyle a=\sup\{\,\phi (x)\mid x\in F,\ y-x\in K\,\},\qquad b=\inf\{\,\phi (x)\mid x\in F,\ x-y\in K\,\}.}
We will prove below that−∞<a≤b{\displaystyle -\infty <a\leq b}. For now, choose anyc{\displaystyle c}satisfyinga≤c≤b{\displaystyle a\leq c\leq b}, and setψ(y)=c{\displaystyle \psi (y)=c},ψ|F=ϕ{\displaystyle \psi |_{F}=\phi }, and then extendψ{\displaystyle \psi }to all ofE{\displaystyle E}by linearity. We need to show thatψ{\displaystyle \psi }isK{\displaystyle K}-positive. Supposez∈K{\displaystyle z\in K}. Then eitherz=0{\displaystyle z=0}, orz=p(x+y){\displaystyle z=p(x+y)}orz=p(x−y){\displaystyle z=p(x-y)}for somep>0{\displaystyle p>0}andx∈F{\displaystyle x\in F}. Ifz=0{\displaystyle z=0}, thenψ(z)=0≥0{\displaystyle \psi (z)=0\geq 0}. In the first remaining casex+y=y−(−x)∈K{\displaystyle x+y=y-(-x)\in K}, and soϕ(−x)≤a≤c{\displaystyle \phi (-x)\leq a\leq c}by definition. Thusψ(z)=p(ϕ(x)+c)≥0.{\displaystyle \psi (z)=p(\phi (x)+c)\geq 0.}In the second case,x−y∈K{\displaystyle x-y\in K}, and so similarlyc≤b≤ϕ(x){\displaystyle c\leq b\leq \phi (x)}by definition and soψ(z)=p(ϕ(x)−c)≥0.{\displaystyle \psi (z)=p(\phi (x)-c)\geq 0.}In all cases,ψ(z)≥0{\displaystyle \psi (z)\geq 0}, and soψ{\displaystyle \psi }isK{\displaystyle K}-positive.
We now prove that−∞<a≤b{\displaystyle -\infty <a\leq b}. Notice that by assumption there exists at least onex∈F{\displaystyle x\in F}for whichy−x∈K{\displaystyle y-x\in K}, and so−∞<a{\displaystyle -\infty <a}. However, it may be the case that there are nox∈F{\displaystyle x\in F}for whichx−y∈K{\displaystyle x-y\in K}, in which caseb=∞{\displaystyle b=\infty }and the inequality is trivial (in this case notice that the third case above cannot happen). Therefore, we may assume thatb<∞{\displaystyle b<\infty }and there is at least onex∈F{\displaystyle x\in F}for whichx−y∈K{\displaystyle x-y\in K}. To prove the inequality, it suffices to show that wheneverx∈F{\displaystyle x\in F}andy−x∈K{\displaystyle y-x\in K}, andx′∈F{\displaystyle x'\in F}andx′−y∈K{\displaystyle x'-y\in K}, thenϕ(x)≤ϕ(x′){\displaystyle \phi (x)\leq \phi (x')}. Indeed,x′−x=(x′−y)+(y−x)∈K{\displaystyle x'-x=(x'-y)+(y-x)\in K}sinceK{\displaystyle K}is a convex cone, and so0≤ϕ(x′−x)=ϕ(x′)−ϕ(x){\displaystyle 0\leq \phi (x'-x)=\phi (x')-\phi (x)}sinceϕ{\displaystyle \phi }isK{\displaystyle K}-positive.
LetEbe areallinear space, and letK⊂Ebe aconvex cone. Letx∈E∖(−K) be such thatRx+K=E. Then there exists aK-positive linear functionalφ:E→Rsuch thatφ(x) > 0.
The Hahn–Banach theorem can be deduced from the M. Riesz extension theorem.
LetVbe a linear space, and letNbe a sublinear function onV. Letφbe a functional on a subspaceU⊂Vthat is dominated byN:
φ(x) ≤N(x) for allx∈U.
The Hahn–Banach theorem asserts thatφcan be extended to a linear functional onVthat is dominated byN.
To derive this from the M. Riesz extension theorem, define a convex coneK⊂R×Vby
K= {(a,x) ∈R×V:N(x) ≤a}.
Define a functionalφ1onR×Uby
φ1(a,x) =a−φ(x).
One can see thatφ1isK-positive, and thatK+ (R×U) =R×V. Thereforeφ1can be extended to aK-positive functionalψ1onR×V. Then
ψ(x) = −ψ1(0,x)
is the desired extension ofφ. Indeed, ifψ(x) >N(x), we have: (N(x),x) ∈K, whereas
ψ1(N(x),x) =N(x)ψ1(1, 0) +ψ1(0,x) =N(x) −ψ(x) < 0,
leading to a contradiction.
|
https://en.wikipedia.org/wiki/M._Riesz_extension_theorem
|
Ingeometry, thehyperplane separation theoremis a theorem aboutdisjointconvex setsinn-dimensionalEuclidean space. There are several rather similar versions. In one version of the theorem, if both these sets areclosedand at least one of them iscompact, then there is ahyperplanein between them and even two parallel hyperplanes in between them separated by a gap. In another version, if both disjoint convex sets are open, then there is a hyperplane in between them, but not necessarily any gap. An axis which is orthogonal to a separating hyperplane is aseparating axis, because the orthogonalprojectionsof the convex bodies onto the axis are disjoint.
The hyperplane separation theorem is due toHermann Minkowski. TheHahn–Banach separation theoremgeneralizes the result totopological vector spaces.
A related result is thesupporting hyperplane theorem.
In the context ofsupport-vector machines, theoptimally separating hyperplaneormaximum-margin hyperplaneis ahyperplanewhich separates twoconvex hullsof points and isequidistantfrom the two.[1][2][3]
Hyperplane separation theorem[4]—LetA{\displaystyle A}andB{\displaystyle B}be two disjoint nonempty convex subsets ofRn{\displaystyle \mathbb {R} ^{n}}. Then there exist a nonzero vectorv{\displaystyle v}and a real numberc{\displaystyle c}such that⟨x,v⟩≥cand⟨y,v⟩≤c{\displaystyle \langle x,v\rangle \geq c\,{\text{ and }}\langle y,v\rangle \leq c}
for allx{\displaystyle x}inA{\displaystyle A}andy{\displaystyle y}inB{\displaystyle B}; i.e., the hyperplane⟨⋅,v⟩=c{\displaystyle \langle \cdot ,v\rangle =c},v{\displaystyle v}the normal vector, separatesA{\displaystyle A}andB{\displaystyle B}.
If both sets are closed, and at least one of them is compact, then the separation can be strict, that is,⟨x,v⟩>c1and⟨y,v⟩<c2{\displaystyle \langle x,v\rangle >c_{1}\,{\text{ and }}\langle y,v\rangle <c_{2}}for somec1>c2{\displaystyle c_{1}>c_{2}}
In all cases, assumeA,B{\displaystyle A,B}to be disjoint, nonempty, and convex subsets ofRn{\displaystyle \mathbb {R} ^{n}}. The summary of the results is as follows:
The number of dimensions must be finite. In infinite-dimensional spaces there are examples of two closed, convex, disjoint sets which cannot be separated by a closed hyperplane (a hyperplane where acontinuouslinear functional equals some constant) even in the weak sense where the inequalities are not strict.[5]
Here, the compactness in the hypothesis cannot be relaxed; see an example in the sectionCounterexamples and uniqueness. This version of the separation theorem does generalize to infinite-dimension; the generalization is more commonly known as theHahn–Banach separation theorem.
The proof is based on the following lemma:
Lemma—LetA{\displaystyle A}andB{\displaystyle B}be two disjoint closed subsets ofRn{\displaystyle \mathbb {R} ^{n}}, and assumeA{\displaystyle A}is compact. Then there exist pointsa0∈A{\displaystyle a_{0}\in A}andb0∈B{\displaystyle b_{0}\in B}minimizing the distance‖a−b‖{\displaystyle \|a-b\|}overa∈A{\displaystyle a\in A}andb∈B{\displaystyle b\in B}.
Leta∈A{\displaystyle a\in A}andb∈B{\displaystyle b\in B}be any pair of points, and letr1=‖b−a‖{\displaystyle r_{1}=\|b-a\|}. SinceA{\displaystyle A}is compact, it is contained in some ball centered ona{\displaystyle a}; let the radius of this ball ber2{\displaystyle r_{2}}. LetS=B∩Br1+r2(a)¯{\displaystyle S=B\cap {\overline {B_{r_{1}+r_{2}}(a)}}}be the intersection ofB{\displaystyle B}with a closed ball of radiusr1+r2{\displaystyle r_{1}+r_{2}}arounda{\displaystyle a}. ThenS{\displaystyle S}is compact and nonempty because it containsb{\displaystyle b}. Since the distance function is continuous, there exist pointsa0{\displaystyle a_{0}}andb0{\displaystyle b_{0}}whose distance‖a0−b0‖{\displaystyle \|a_{0}-b_{0}\|}is the minimum over all pairs of points inA×S{\displaystyle A\times S}. It remains to show thata0{\displaystyle a_{0}}andb0{\displaystyle b_{0}}in fact have the minimum distance over all pairs of points inA×B{\displaystyle A\times B}. Suppose for contradiction that there exist pointsa′{\displaystyle a'}andb′{\displaystyle b'}such that‖a′−b′‖<‖a0−b0‖{\displaystyle \|a'-b'\|<\|a_{0}-b_{0}\|}. Then in particular,‖a′−b′‖<r1{\displaystyle \|a'-b'\|<r_{1}}, and by the triangle inequality,‖a−b′‖≤‖a′−b′‖+‖a−a′‖<r1+r2{\displaystyle \|a-b'\|\leq \|a'-b'\|+\|a-a'\|<r_{1}+r_{2}}. Thereforeb′{\displaystyle b'}is contained inS{\displaystyle S}, which contradicts the fact thata0{\displaystyle a_{0}}andb0{\displaystyle b_{0}}had minimum distance overA×S{\displaystyle A\times S}.◻{\displaystyle \square }
We first prove the second case. (See the diagram.)
WLOG,A{\displaystyle A}is compact. By the lemma, there exist pointsa0∈A{\displaystyle a_{0}\in A}andb0∈B{\displaystyle b_{0}\in B}of minimum distance to each other.
SinceA{\displaystyle A}andB{\displaystyle B}are disjoint, we havea0≠b0{\displaystyle a_{0}\neq b_{0}}. Now, construct two hyperplanesLA,LB{\displaystyle L_{A},L_{B}}perpendicular to line segment[a0,b0]{\displaystyle [a_{0},b_{0}]}, withLA{\displaystyle L_{A}}acrossa0{\displaystyle a_{0}}andLB{\displaystyle L_{B}}acrossb0{\displaystyle b_{0}}. We claim that neitherA{\displaystyle A}norB{\displaystyle B}enters the space betweenLA,LB{\displaystyle L_{A},L_{B}}, and thus the perpendicular hyperplanes to(a0,b0){\displaystyle (a_{0},b_{0})}satisfy the requirement of the theorem.
Algebraically, the hyperplanesLA,LB{\displaystyle L_{A},L_{B}}are defined by the vectorv:=b0−a0{\displaystyle v:=b_{0}-a_{0}}, and two constantscA:=⟨v,a0⟩<cB:=⟨v,b0⟩{\displaystyle c_{A}:=\langle v,a_{0}\rangle <c_{B}:=\langle v,b_{0}\rangle }, such thatLA={x:⟨v,x⟩=cA},LB={x:⟨v,x⟩=cB}{\displaystyle L_{A}=\{x:\langle v,x\rangle =c_{A}\},L_{B}=\{x:\langle v,x\rangle =c_{B}\}}. Our claim is that∀a∈A,⟨v,a⟩≤cA{\displaystyle \forall a\in A,\langle v,a\rangle \leq c_{A}}and∀b∈B,⟨v,b⟩≥cB{\displaystyle \forall b\in B,\langle v,b\rangle \geq c_{B}}.
Suppose there is somea∈A{\displaystyle a\in A}such that⟨v,a⟩>cA{\displaystyle \langle v,a\rangle >c_{A}}, then leta′{\displaystyle a'}be the foot of perpendicular fromb0{\displaystyle b_{0}}to the line segment[a0,a]{\displaystyle [a_{0},a]}. SinceA{\displaystyle A}is convex,a′{\displaystyle a'}is insideA{\displaystyle A}, and by planar geometry,a′{\displaystyle a'}is closer tob0{\displaystyle b_{0}}thana0{\displaystyle a_{0}}, contradiction. Similar argument applies toB{\displaystyle B}.
Now for the first case.
Approach bothA,B{\displaystyle A,B}from the inside byA1⊆A2⊆⋯⊆A{\displaystyle A_{1}\subseteq A_{2}\subseteq \cdots \subseteq A}andB1⊆B2⊆⋯⊆B{\displaystyle B_{1}\subseteq B_{2}\subseteq \cdots \subseteq B}, such that eachAk,Bk{\displaystyle A_{k},B_{k}}is closed and compact, and the unions are the relative interiorsrelint(A),relint(B){\displaystyle \mathrm {relint} (A),\mathrm {relint} (B)}. (Seerelative interiorpage for details.)
Now by the second case, for each pairAk,Bk{\displaystyle A_{k},B_{k}}there exists some unit vectorvk{\displaystyle v_{k}}and real numberck{\displaystyle c_{k}}, such that⟨vk,Ak⟩<ck<⟨vk,Bk⟩{\displaystyle \langle v_{k},A_{k}\rangle <c_{k}<\langle v_{k},B_{k}\rangle }.
Since the unit sphere is compact, we can take a convergent subsequence, so thatvk→v{\displaystyle v_{k}\to v}. LetcA:=supa∈A⟨v,a⟩,cB:=infb∈B⟨v,b⟩{\displaystyle c_{A}:=\sup _{a\in A}\langle v,a\rangle ,c_{B}:=\inf _{b\in B}\langle v,b\rangle }. We claim thatcA≤cB{\displaystyle c_{A}\leq c_{B}}, thus separatingA,B{\displaystyle A,B}.
Assume not, then there exists somea∈A,b∈B{\displaystyle a\in A,b\in B}such that⟨v,a⟩>⟨v,b⟩{\displaystyle \langle v,a\rangle >\langle v,b\rangle }, then sincevk→v{\displaystyle v_{k}\to v}, for large enoughk{\displaystyle k}, we have⟨vk,a⟩>⟨vk,b⟩{\displaystyle \langle v_{k},a\rangle >\langle v_{k},b\rangle }, contradiction.
Since a separating hyperplane cannot intersect the interiors of open convex sets, we have a corollary:
Separation theorem I—LetA{\displaystyle A}andB{\displaystyle B}be two disjoint nonempty convex sets. IfA{\displaystyle A}is open, then there exist a nonzero vectorv{\displaystyle v}and real numberc{\displaystyle c}such that⟨x,v⟩>c≥⟨y,v⟩{\displaystyle \langle x,v\rangle >c\geq \langle y,v\rangle }
for allx{\displaystyle x}inA{\displaystyle A}andy{\displaystyle y}inB{\displaystyle B}. If both sets are open, then there exist a nonzero vectorv{\displaystyle v}and real numberc{\displaystyle c}such that⟨x,v⟩>c>⟨y,v⟩{\displaystyle \langle x,v\rangle >c>\langle y,v\rangle }
for allx{\displaystyle x}inA{\displaystyle A}andy{\displaystyle y}inB{\displaystyle B}.
If the setsA,B{\displaystyle A,B}have possible intersections, but theirrelative interiorsare disjoint, then the proof of the first case still applies with no change, thus yielding:
Separation theorem II—LetA{\displaystyle A}andB{\displaystyle B}be two nonempty convex subsets ofRn{\displaystyle \mathbb {R} ^{n}}with disjoint relative interiors. Then there exist a nonzero vectorv{\displaystyle v}and a real numberc{\displaystyle c}such that⟨x,v⟩≥cand⟨y,v⟩≤c{\displaystyle \langle x,v\rangle \geq c\,{\text{ and }}\langle y,v\rangle \leq c}for allx{\displaystyle x}inA{\displaystyle A}andy{\displaystyle y}inB{\displaystyle B};
in particular, we have thesupporting hyperplane theorem.
Supporting hyperplane theorem—ifA{\displaystyle A}is a convex set inRn,{\displaystyle \mathbb {R} ^{n},}anda0{\displaystyle a_{0}}is a point on theboundaryofA{\displaystyle A}, then there exists a supporting hyperplane ofA{\displaystyle A}containinga0{\displaystyle a_{0}}.
If the affine span ofA{\displaystyle A}is not all ofRn{\displaystyle \mathbb {R} ^{n}}, then extend the affine span to a supporting hyperplane. Else,relint(A)=int(A){\displaystyle \mathrm {relint} (A)=\mathrm {int} (A)}is disjoint fromrelint({a0})={a0}{\displaystyle \mathrm {relint} (\{a_{0}\})=\{a_{0}\}}, so apply the above theorem.
Note that the existence of a hyperplane that only "separates" two convex sets in the weak sense of both inequalities being non-strict obviously does not imply that the two sets are disjoint. Both sets could have points located on the hyperplane.
If one ofAorBis not convex, then there are many possible counterexamples. For example,AandBcould be concentric circles. A more subtle counterexample is one in whichAandBare both closed but neither one is compact. For example, ifAis a closed half plane and B is bounded by one arm of a hyperbola, then there is no strictly separating hyperplane: for instance, takeA= {(x,y) :x≤ 0} andB= {(x,y) :x> 0,y≥ 1/x}.
(Although, by an instance of the second theorem, there is a hyperplane that separates their interiors.) Another type of counterexample hasAcompact andBopen. For example, A can be a closed square and B can be an open square that touchesA.
In the first version of the theorem, evidently the separating hyperplane is never unique. In the second version, it may or may not be unique. Technically a separating axis is never unique because it can be translated; in the second version of the theorem, a separating axis can be unique up to translation.
Thehorn angleprovides a good counterexample to many hyperplane separations. For example, inR2{\displaystyle \mathbb {R} ^{2}}, the unit disk is disjoint from the open interval((1,0),(1,1)){\displaystyle ((1,0),(1,1))}, but the only line separating them contains the entirety of((1,0),(1,1)){\displaystyle ((1,0),(1,1))}. This shows that ifA{\displaystyle A}is closed andB{\displaystyle B}isrelativelyopen, then there does not necessarily exist a separation that is strict forB{\displaystyle B}. However, ifA{\displaystyle A}is closedpolytopethen such a separation exists.[6]
Farkas' lemmaand related results can be understood as hyperplane separation theorems when the convex bodies are defined by finitely many linear inequalities.
More results may be found.[6]
In collision detection, the hyperplane separation theorem is usually used in the following form:
Separating axis theorem—Two closed convex objects are disjoint if there exists a line ("separating axis") onto which the two objects' projections are disjoint.
Regardless of dimensionality, the separating axis is always a line.
For example, in 3D, the space is separated by planes, but the separating axis is perpendicular to the separating plane.
The separating axis theorem can be applied for fastcollision detectionbetween polygon meshes. Eachface'snormalor other feature direction is used as a separating axis. Note that this yields possible separating axes, not separating lines/planes.
In 3D, using face normals alone will fail to separate some edge-on-edge non-colliding cases. Additional axes, consisting of the cross-products of pairs of edges, one taken from each object, are required.[7]
For increased efficiency, parallel axes may be calculated as a single axis.
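A minimal 2-D version of this test, for two convex polygons given as counter-clockwise vertex lists and using only the edge normals as candidate axes, might look as follows (a sketch; the polygons at the end are arbitrary test data).

def _project(poly, axis):
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def _edge_normals(poly):
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        yield (y2 - y1, -(x2 - x1))          # normal of the edge; orientation is irrelevant here

def sat_disjoint(p, q):
    for axis in list(_edge_normals(p)) + list(_edge_normals(q)):
        pmin, pmax = _project(p, axis)
        qmin, qmax = _project(q, axis)
        if pmax < qmin or qmax < pmin:       # projections do not overlap
            return True                      # found a separating axis
    return False                             # no separating axis: the polygons intersect

square   = [(0, 0), (1, 0), (1, 1), (0, 1)]
far_tri  = [(2, 0), (3, 0), (2.5, 1)]
near_tri = [(0.5, 0.5), (1.5, 0.5), (1.0, 1.5)]
print(sat_disjoint(square, far_tri))    # True  (disjoint)
print(sat_disjoint(square, near_tri))   # False (overlapping)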
|
https://en.wikipedia.org/wiki/Separating_axis_theorem
|
In mathematics, specifically in functional analysis and Hilbert space theory, vector-valued Hahn–Banach theorems are generalizations of the Hahn–Banach theorems from linear functionals (which are always valued in the real numbers R{\displaystyle \mathbb {R} } or the complex numbers C{\displaystyle \mathbb {C} }) to linear operators valued in topological vector spaces (TVSs).
Throughout, X and Y will be topological vector spaces (TVSs) over the field K{\displaystyle \mathbb {K} } and L(X; Y) will denote the vector space of all continuous linear maps from X to Y, where if X and Y are normed spaces then we endow L(X; Y) with its canonical operator norm.
If M is a vector subspace of a TVS X then Y has the extension property from M to X if every continuous linear map f : M → Y has a continuous linear extension to all of X. If X and Y are normed spaces, then we say that Y has the metric extension property from M to X if this continuous linear extension can be chosen to have norm equal to ‖f‖.
A TVS Y has the extension property from all subspaces of X (to X) if for every vector subspace M of X, Y has the extension property from M to X. If X and Y are normed spaces then Y has the metric extension property from all subspaces of X (to X) if for every vector subspace M of X, Y has the metric extension property from M to X.
A TVS Y has the extension property[1] if for every locally convex space X and every vector subspace M of X, Y has the extension property from M to X.
A Banach space Y has the metric extension property[1] if for every Banach space X and every vector subspace M of X, Y has the metric extension property from M to X.
1-extensions
If M is a vector subspace of a normed space X over the field K{\displaystyle \mathbb {K} } then a normed space Y has the immediate 1-extension property from M to X if for every x ∉ M, every continuous linear map f : M → Y has a continuous linear extension F:M⊕(Kx)→Y{\displaystyle F:M\oplus (\mathbb {K} x)\to Y} such that ‖f‖ = ‖F‖. We say that Y has the immediate 1-extension property if Y has the immediate 1-extension property from M to X for every Banach space X and every vector subspace M of X.
A locally convex topological vector space Y is injective[1] if for every locally convex space Z containing Y as a topological vector subspace, there exists a continuous projection from Z onto Y.
A Banach space Y is 1-injective[1] or a P1-space if for every Banach space Z containing Y as a normed vector subspace (i.e., the norm on Y is the restriction of Z's norm to Y), there exists a continuous projection from Z onto Y having norm 1.
In order for a TVS Y to have the extension property, it must be complete (since it must be possible to extend the identity map 1:Y→Y{\displaystyle \mathbf {1} :Y\to Y} from Y to the completion Z of Y; that is, to the map Z → Y).[1]
If f : M → Y is a continuous linear map from a vector subspace M of X into a complete Hausdorff space Y then there always exists a unique continuous linear extension of f from M to the closure of M in X.[1][2] Consequently, it suffices to only consider maps from closed vector subspaces into complete Hausdorff spaces.[1]
Any locally convex space having the extension property is injective.[1] If Y is an injective Banach space, then for every Banach space X, every continuous linear operator from a vector subspace of X into Y has a continuous linear extension to all of X.[1]
In 1953, Alexander Grothendieck showed that any Banach space with the extension property is either finite-dimensional or else not separable.[1]
Theorem[1]—Suppose that Y is a Banach space over the field K.{\displaystyle \mathbb {K} .} Then the following are equivalent:
where if, in addition, Y is a vector space over the real numbers then we may add to this list:
Theorem[1]—Suppose that Y is a real Banach space with the metric extension property.
Then the following are equivalent:
Products of the underlying field
Suppose that X{\displaystyle X} is a vector space over K{\displaystyle \mathbb {K} }, where K{\displaystyle \mathbb {K} } is either R{\displaystyle \mathbb {R} } or C{\displaystyle \mathbb {C} }, and let T{\displaystyle T} be any set.
Let Y:=KT,{\displaystyle Y:=\mathbb {K} ^{T},} which is the product of K{\displaystyle \mathbb {K} } taken |T|{\displaystyle |T|} times, or equivalently, the set of all K{\displaystyle \mathbb {K} }-valued functions on T.
Give Y{\displaystyle Y} its usual product topology, which makes it into a Hausdorff locally convex TVS.
Then Y{\displaystyle Y} has the extension property.[1]
For any set T,{\displaystyle T,} the Lp space ℓ∞(T){\displaystyle \ell ^{\infty }(T)} has both the extension property and the metric extension property.
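One standard way to see the metric extension property of ℓ∞(T){\displaystyle \ell ^{\infty }(T)} is to apply the scalar Hahn–Banach theorem coordinatewise; the following is only a sketch of that argument under the usual sup-norm conventions, not a statement taken from the source. Given a vector subspace M of a normed space X and a continuous linear map f : M → ℓ∞(T), write each value coordinatewise:
\[
f(x)=(f_{t}(x))_{t\in T},\qquad |f_{t}(x)|\leq \|f(x)\|_{\infty }\leq \|f\|\,\|x\|\quad (x\in M).
\]
The scalar Hahn–Banach theorem extends each coordinate functional \(f_{t}\) to \(F_{t}:X\to \mathbb {K}\) with \(\|F_{t}\|\leq \|f\|\), and
\[
F(x):=(F_{t}(x))_{t\in T},\qquad \|F(x)\|_{\infty }=\sup _{t\in T}|F_{t}(x)|\leq \|f\|\,\|x\|,
\]
so \(F\) is a continuous linear extension of \(f\) to all of \(X\) with \(\|F\|=\|f\|\).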
|
https://en.wikipedia.org/wiki/Vector-valued_Hahn%E2%80%93Banach_theorems
|
In convex analysis, Popoviciu's inequality is an inequality about convex functions. It is similar to Jensen's inequality and was found in 1965 by Tiberiu Popoviciu,[1][2] a Romanian mathematician.
Let f be a function from an interval I⊆R{\displaystyle I\subseteq \mathbb {R} } to R{\displaystyle \mathbb {R} }. If f is convex, then for any three points x, y, z in I,
{\displaystyle {\frac {f(x)+f(y)+f(z)}{3}}+f\left({\frac {x+y+z}{3}}\right)\geq {\frac {2}{3}}\left[f\left({\frac {x+y}{2}}\right)+f\left({\frac {y+z}{2}}\right)+f\left({\frac {z+x}{2}}\right)\right].}
If a function f is continuous, then it is convex if and only if the above inequality holds for all x, y, z from I{\displaystyle I}. When f is strictly convex, the inequality is strict except for x = y = z.[3]
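As a quick illustration, the following Python sketch numerically checks the inequality above on random triples for the convex function f(x) = exp(x); the choice of function, range, and sample count is arbitrary and purely for demonstration.

import math
import random

def popoviciu_gap(f, x, y, z):
    """Left-hand side minus right-hand side of Popoviciu's inequality.

    The gap is nonnegative whenever f is convex.
    """
    lhs = (f(x) + f(y) + f(z)) / 3 + f((x + y + z) / 3)
    rhs = (2 / 3) * (f((x + y) / 2) + f((y + z) / 2) + f((z + x) / 2))
    return lhs - rhs

random.seed(0)
for _ in range(10_000):
    x, y, z = (random.uniform(-5.0, 5.0) for _ in range(3))
    # Allow a tiny negative tolerance for floating-point rounding.
    assert popoviciu_gap(math.exp, x, y, z) >= -1e-12

For a strictly convex f such as exp, the gap is strictly positive unless x = y = z, matching the remark above.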
It can be generalized to any finite number n of points instead of 3, taken on the right-hand side k at a time instead of 2 at a time:[4]
Let f be a continuous function from an interval I⊆R{\displaystyle I\subseteq \mathbb {R} } to R{\displaystyle \mathbb {R} }. Then f is convex if and only if, for any integers n and k where n ≥ 3 and 2≤k≤n−1{\displaystyle 2\leq k\leq n-1}, and any n points x1,…,xn{\displaystyle x_{1},\dots ,x_{n}} from I,
[5][6][7][8]
Popoviciu's inequality can also be generalized to a weighted inequality.[9]
Let f be a continuous function from an interval I⊆R{\displaystyle I\subseteq \mathbb {R} } to R{\displaystyle \mathbb {R} }. Let x1,x2,x3{\displaystyle x_{1},x_{2},x_{3}} be three points from I{\displaystyle I}, and let w1,w2,w3{\displaystyle w_{1},w_{2},w_{3}} be three nonnegative reals such that w2+w3≠0,w3+w1≠0{\displaystyle w_{2}+w_{3}\neq 0,w_{3}+w_{1}\neq 0} and w1+w2≠0{\displaystyle w_{1}+w_{2}\neq 0}. Then,
{\displaystyle w_{1}f(x_{1})+w_{2}f(x_{2})+w_{3}f(x_{3})+(w_{1}+w_{2}+w_{3})f\left({\frac {w_{1}x_{1}+w_{2}x_{2}+w_{3}x_{3}}{w_{1}+w_{2}+w_{3}}}\right)\geq (w_{1}+w_{2})f\left({\frac {w_{1}x_{1}+w_{2}x_{2}}{w_{1}+w_{2}}}\right)+(w_{2}+w_{3})f\left({\frac {w_{2}x_{2}+w_{3}x_{3}}{w_{2}+w_{3}}}\right)+(w_{3}+w_{1})f\left({\frac {w_{3}x_{3}+w_{1}x_{1}}{w_{3}+w_{1}}}\right).}
The unweighted inequality is recovered by taking w1=w2=w3=1{\displaystyle w_{1}=w_{2}=w_{3}=1}.
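As a quick sanity check of the weighted statement, one can take f(x) = x^2 (which is convex) with x_1 = 1, x_2 = x_3 = 0 and weights w_1 = 2, w_2 = w_3 = 1; these particular values are chosen only for illustration.
\[
\text{LHS}=2f(1)+f(0)+f(0)+4f\!\left(\tfrac{2}{4}\right)=2+4\cdot \tfrac{1}{4}=3,\qquad
\text{RHS}=3f\!\left(\tfrac{2}{3}\right)+2f(0)+3f\!\left(\tfrac{2}{3}\right)=\tfrac{4}{3}+0+\tfrac{4}{3}=\tfrac{8}{3},
\]
and indeed \(3\geq \tfrac{8}{3}\).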
|
https://en.wikipedia.org/wiki/Popoviciu%27s_inequality
|