In statistics, the grouped Dirichlet distribution (GDD) is a multivariate generalization of the Dirichlet distribution. It was first described by Ng et al. in 2008.[1] The grouped Dirichlet distribution arises in the analysis of categorical data where some observations could fall into any of a set of other "crisp" categories. For example, one may have a data set consisting of cases and controls under two different conditions. With complete data, the cross-classification of disease status forms a 2 (case/control) × 2 (condition/no-condition) table of cell probabilities. If, however, the data include, say, non-respondents who are known to be controls or cases, then the cross-classification of disease status forms a 2 × 3 table, where the probability of the last column is the sum of the probabilities of the first two columns in each row. The GDD allows the full estimation of the cell probabilities under such aggregation conditions.[1]

Consider the closed simplex set
$$\mathcal{T}_n = \left\{ (x_1,\ldots,x_n) \,\middle|\, x_i \ge 0,\ i=1,\ldots,n,\ \sum_{i=1}^n x_i = 1 \right\}$$
and $\mathbf{x} \in \mathcal{T}_n$. Writing $\mathbf{x}_{-n} = (x_1,\ldots,x_{n-1})$ for the first $n-1$ elements of a member of $\mathcal{T}_n$, the distribution of $\mathbf{x}$ for two partitions has a density function whose normalizing constant is expressed through the multivariate beta function $\mathrm{B}(\mathbf{a})$. Ng et al.[1] went on to define an $m$-partition grouped Dirichlet distribution with a density for $\mathbf{x}_{-n}$ in which $\mathbf{s} = (s_1,\ldots,s_m)$ is a vector of integers with $0 = s_0 < s_1 \leqslant \cdots \leqslant s_m = n$, together with the corresponding normalizing constant. The authors went on to use these distributions in the context of three different applications in medical science.
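As a minimal numerical illustration of the aggregation step described above (not taken from Ng et al.), one can draw 2 × 2 cell probabilities from an ordinary Dirichlet prior and form the aggregated 2 × 3 table; the α values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw cell probabilities for a 2x2 (case/control x condition/no-condition) table.
theta = rng.dirichlet(alpha=[1.0, 1.0, 1.0, 1.0]).reshape(2, 2)

# Aggregated 2x3 table: the third column is the row-wise sum of the first two,
# i.e. the probability mass available to non-respondents of known status.
table = np.column_stack([theta, theta.sum(axis=1)])
print(table)
```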
https://en.wikipedia.org/wiki/Grouped_Dirichlet_distribution
In statistics, the inverted Dirichlet distribution is a multivariate generalization of the beta prime distribution, and is related to the Dirichlet distribution. It was first described by Tiao and Guttman in 1965.[1]

The distribution has a density function given by
$$f(x_1,\ldots,x_k) = \frac{\Gamma\left(\nu_1+\cdots+\nu_{k+1}\right)}{\prod_{j=1}^{k+1}\Gamma\left(\nu_j\right)} \; \frac{\prod_{j=1}^{k} x_j^{\nu_j-1}}{\left(1+\sum_{j=1}^{k} x_j\right)^{\nu_1+\cdots+\nu_{k+1}}}, \qquad x_j > 0.$$

The distribution has applications in statistical regression and arises naturally when considering the multivariate Student distribution. It can be characterized[2] by its mixed moments:
$$E\left[\prod_{j=1}^k x_j^{q_j}\right] = \frac{\Gamma\left(\nu_{k+1}-\sum_{j=1}^k q_j\right)}{\Gamma\left(\nu_{k+1}\right)} \prod_{j=1}^k \frac{\Gamma\left(\nu_j+q_j\right)}{\Gamma\left(\nu_j\right)},$$
provided that $q_j > -\nu_j,\ 1 \leqslant j \leqslant k$, and $\nu_{k+1} > q_1+\ldots+q_k$.

The inverted Dirichlet distribution is conjugate to the negative multinomial distribution if a generalized form of odds ratio is used instead of the categories' probabilities: if the negative multinomial parameter vector is given by $p$, the parameters are changed to $x_i = p_i/p_0,\ i = 1,\ldots,k$, where $p_0 = 1 - \sum_{i=1}^k p_i$.

T. Bdiri et al. have developed several models that use the inverted Dirichlet distribution to represent and model non-Gaussian data. They have introduced finite[3][4] and infinite[5] mixture models of inverted Dirichlet distributions, using the Newton–Raphson technique to estimate the parameters and the Dirichlet process to model infinite mixtures. T. Bdiri et al. have also used the inverted Dirichlet distribution to propose an approach to generate Support Vector Machine kernels[6] based on Bayesian inference, and another approach to establish hierarchical clustering.[7][8]
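A small sampling sketch, using the standard construction of the inverted Dirichlet as ratios of independent gamma variates, $x_j = G_j/G_{k+1}$ with $G_j \sim \operatorname{Gamma}(\nu_j)$; the moment check uses the mixed-moment formula above with $q_j = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_inverted_dirichlet(nu, size):
    """Sample via independent gammas: x_j = G_j / G_{k+1}."""
    g = rng.gamma(shape=nu, size=(size, len(nu)))   # columns ~ Gamma(nu_j, 1)
    return g[:, :-1] / g[:, -1:]                    # divide by the last gamma

nu = np.array([2.0, 3.0, 4.0])    # nu_1, nu_2, nu_{k+1} with k = 2 (illustrative)
x = sample_inverted_dirichlet(nu, size=100_000)

# Check first moments against E[x_j] = nu_j / (nu_{k+1} - 1).
print(x.mean(axis=0), nu[:-1] / (nu[-1] - 1.0))
```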
https://en.wikipedia.org/wiki/Inverted_Dirichlet_distribution
In probability theory, Dirichlet processes (after the distribution associated with Peter Gustav Lejeune Dirichlet) are a family of stochastic processes whose realizations are probability distributions. In other words, a Dirichlet process is a probability distribution whose range is itself a set of probability distributions. It is often used in Bayesian inference to describe the prior knowledge about the distribution of random variables: how likely it is that the random variables are distributed according to one or another particular distribution.

As an example, a bag of 100 real-world dice is a random probability mass function (random pmf): to sample this random pmf you put your hand in the bag and draw out a die, that is, you draw a pmf. A bag of dice manufactured using a crude process 100 years ago will likely have probabilities that deviate wildly from the uniform pmf, whereas a bag of state-of-the-art dice used by Las Vegas casinos may have barely perceptible imperfections. We can model the randomness of pmfs with the Dirichlet distribution.[1]

The Dirichlet process is specified by a base distribution $H$ and a positive real number $\alpha$ called the concentration parameter (also known as the scaling parameter). The base distribution is the expected value of the process, i.e., the Dirichlet process draws distributions "around" the base distribution the way a normal distribution draws real numbers around its mean. However, even if the base distribution is continuous, the distributions drawn from the Dirichlet process are almost surely discrete. The scaling parameter specifies how strong this discretization is: in the limit of $\alpha \to 0$, the realizations are all concentrated at a single value, while in the limit of $\alpha \to \infty$ the realizations become continuous. Between the two extremes the realizations are discrete distributions with less and less concentration as $\alpha$ increases.

The Dirichlet process can also be seen as the infinite-dimensional generalization of the Dirichlet distribution. In the same way as the Dirichlet distribution is the conjugate prior for the categorical distribution, the Dirichlet process is the conjugate prior for infinite, nonparametric discrete distributions. A particularly important application of Dirichlet processes is as a prior probability distribution in infinite mixture models.

The Dirichlet process was formally introduced by Thomas S. Ferguson in 1973.[2] It has since been applied in data mining and machine learning, among others for natural language processing, computer vision and bioinformatics.

Dirichlet processes are usually used when modelling data that tends to repeat previous values in a so-called "rich get richer" fashion. Specifically, suppose that the generation of values $X_1, X_2, \dots$ can be simulated by the following algorithm.

a) With probability $\frac{\alpha}{\alpha+n-1}$ draw $X_n$ from $H$.
b) With probability $\frac{n_x}{\alpha+n-1}$ set $X_n = x$, where $n_x$ is the number of previous observations of $x$. (Formally, $n_x := |\{j \colon X_j = x \text{ and } j < n\}|$, where $|\cdot|$ denotes the number of elements in the set.)
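A direct transcription of this sampling scheme, as a sketch; $H$ is taken to be a standard normal base distribution for concreteness (an assumption, not part of the definition):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_sequence(alpha, n_samples, base_draw):
    """Simulate X_1, X_2, ... by the 'rich get richer' scheme above."""
    xs = []
    for n in range(1, n_samples + 1):
        if rng.random() < alpha / (alpha + n - 1):
            xs.append(base_draw())                 # fresh draw from the base H
        else:
            # Picking a uniformly random past index repeats the value x with
            # probability proportional to its count n_x, as required.
            xs.append(xs[rng.integers(len(xs))])
    return xs

xs = draw_sequence(alpha=2.0, n_samples=1000, base_draw=lambda: rng.normal())
print(len(set(xs)), "distinct values among", len(xs), "draws")
```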
At the same time, another common model for data is that the observations $X_1, X_2, \dots$ are assumed to be independent and identically distributed (i.i.d.) according to some (random) distribution $P$. The goal of introducing Dirichlet processes is to be able to describe the procedure outlined above in this i.i.d. model.

The $X_1, X_2, \dots$ observations in the algorithm are not independent, since we have to consider the previous results when generating the next value. They are, however, exchangeable. This fact can be shown by calculating the joint probability distribution of the observations and noticing that the resulting formula only depends on which $x$ values occur among the observations and how many repetitions they each have. Because of this exchangeability, de Finetti's representation theorem applies and it implies that the observations $X_1, X_2, \dots$ are conditionally independent given a (latent) distribution $P$. This $P$ is a random variable itself and has a distribution. This distribution (over distributions) is called a Dirichlet process ($\operatorname{DP}$). In summary, this means that we get an equivalent procedure to the above algorithm: first draw $P \sim \operatorname{DP}(H, \alpha)$, then draw $X_1, X_2, \dots$ independently from $P$.

In practice, however, drawing a concrete distribution $P$ is impossible, since its specification requires an infinite amount of information. This is a common phenomenon in the context of Bayesian non-parametric statistics, where a typical task is to learn distributions on function spaces, which involve effectively infinitely many parameters. The key insight is that in many applications the infinite-dimensional distributions appear only as an intermediary computational device and are not required for either the initial specification of prior beliefs or for the statement of the final inference.

Given a measurable set $S$, a base probability distribution $H$ and a positive real number $\alpha$, the Dirichlet process $\operatorname{DP}(H, \alpha)$ is a stochastic process whose sample path (or realization, i.e. an infinite sequence of random variates drawn from the process) is a probability distribution over $S$, such that the following holds. For any measurable finite partition of $S$, denoted $\{B_i\}_{i=1}^n$,
$$\left(P(B_1), \ldots, P(B_n)\right) \sim \operatorname{Dir}\left(\alpha H(B_1), \ldots, \alpha H(B_n)\right),$$
where $\operatorname{Dir}$ denotes the Dirichlet distribution and the notation $X \sim D$ means that the random variable $X$ has the distribution $D$.

There are several equivalent views of the Dirichlet process. Besides the formal definition above, the Dirichlet process can be defined implicitly through de Finetti's theorem as described in the first section; this is often called the Chinese restaurant process. A third alternative is the stick-breaking process, which defines the Dirichlet process constructively by writing a distribution sampled from the process as $f(x) = \sum_{k=1}^{\infty} \beta_k \delta_{x_k}(x)$, where $\{x_k\}_{k=1}^{\infty}$ are samples from the base distribution $H$, $\delta_{x_k}$ is an indicator function centered on $x_k$ (zero everywhere except for $\delta_{x_k}(x_k) = 1$) and the $\beta_k$ are defined by a recursive scheme that repeatedly samples from the beta distribution $\operatorname{Beta}(1, \alpha)$.
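A quick empirical sanity check of the defining partition property, as a sketch: draw many truncated stick-breaking realizations with a standard normal base measure (both the truncation level and the choice of $H$ are assumptions made for the demonstration), collect the masses each realization assigns to the cells of a fixed partition, and compare their mean with $H(B_i)$, the mean of the corresponding Dirichlet distribution:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha, K, reps = 2.0, 500, 2000                  # K = truncation level (assumption)
edges = np.array([-np.inf, -1.0, 1.0, np.inf])   # partition of S = R into 3 cells

masses = np.zeros((reps, 3))
for r in range(reps):
    b = rng.beta(1.0, alpha, size=K)
    w = b * np.cumprod(np.concatenate(([1.0], 1.0 - b[:-1])))  # stick weights
    atoms = rng.normal(size=K)                                 # atoms x_k ~ H = N(0,1)
    for i in range(3):
        cell = (atoms > edges[i]) & (atoms <= edges[i + 1])
        masses[r, i] = w[cell].sum()

print(masses.mean(axis=0))          # empirical E[P(B_i)] over realizations
print(np.diff(norm.cdf(edges)))     # H(B_i): the mean of Dir(alpha H(B_i))
```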
A widely employed metaphor for the Dirichlet process is based on the so-called Chinese restaurant process. The metaphor is as follows: Imagine a Chinese restaurant into which customers enter. A new customer sits down at a table with a probability proportional to the number of customers already sitting there. Additionally, a customer opens a new table with a probability proportional to the scaling parameter $\alpha$. After infinitely many customers have entered, one obtains a probability distribution over infinitely many tables to be chosen. This probability distribution over the tables is a random sample of the probabilities of observations drawn from a Dirichlet process with scaling parameter $\alpha$. If one associates draws from the base measure $H$ with every table, the resulting distribution over the sample space $S$ is a random sample of a Dirichlet process. The Chinese restaurant process is related to the Pólya urn sampling scheme, which yields samples from finite Dirichlet distributions.

Because customers sit at a table with a probability proportional to the number of customers already sitting at the table, two properties of the DP can be deduced:

1. The Dirichlet process exhibits a self-reinforcing property: the more often a given value has been sampled in the past, the more likely it is to be sampled again.
2. Even if $H$ is a distribution over an uncountable set, there is a nonzero probability that two samples coincide, because the probability mass concentrates on a small number of tables.

A third approach to the Dirichlet process is the so-called stick-breaking process view. Conceptually, this involves repeatedly breaking off and discarding a random fraction (sampled from a Beta distribution) of a "stick" that is initially of length 1. Remember that draws from a Dirichlet process are distributions over a set $S$. As noted previously, the distribution drawn is discrete with probability 1. In the stick-breaking process view, we explicitly use the discreteness and give the probability mass function of this (random) discrete distribution as
$$f(\theta) = \sum_{k=1}^{\infty} \beta_k \delta_{\theta_k}(\theta),$$
where $\delta_{\theta_k}$ is the indicator function which evaluates to zero everywhere except for $\delta_{\theta_k}(\theta_k) = 1$. Since this distribution is random itself, its mass function is parameterized by two sets of random variables: the locations $\{\theta_k\}_{k=1}^{\infty}$ and the corresponding probabilities $\{\beta_k\}_{k=1}^{\infty}$. In the following, we present without proof what these random variables are.

The locations $\theta_k$ are independent and identically distributed according to $H$, the base distribution of the Dirichlet process. The probabilities $\beta_k$ are given by a procedure resembling the breaking of a unit-length stick (hence the name):
$$\beta_k = \beta'_k \prod_{i=1}^{k-1} \left(1 - \beta'_i\right),$$
where the $\beta'_k$ are independent random variables with the beta distribution $\operatorname{Beta}(1, \alpha)$. The resemblance to "stick-breaking" can be seen by considering $\beta_k$ as the length of a piece of a stick. We start with a unit-length stick and in each step we break off a portion of the remaining stick according to $\beta'_k$ and assign this broken-off piece to $\beta_k$. The formula can be understood by noting that after the first $k - 1$ values have their portions assigned, the length of the remainder of the stick is $\prod_{i=1}^{k-1}(1-\beta'_i)$, and this piece is broken according to $\beta'_k$ and gets assigned to $\beta_k$.
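The stick-breaking recursion can be transcribed directly. The sketch below truncates once the leftover stick is negligible (a practical assumption) and also illustrates the remark that follows, namely that smaller $\alpha$ yields more concentrated distributions:

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_weights(alpha, tol=1e-4):
    """Break a unit stick until the remainder drops below tol."""
    weights, remaining = [], 1.0
    while remaining > tol:
        frac = rng.beta(1.0, alpha)       # beta'_k ~ Beta(1, alpha)
        weights.append(remaining * frac)  # beta_k = beta'_k * prod_i (1 - beta'_i)
        remaining *= 1.0 - frac
    return np.array(weights)

for alpha in (0.5, 5.0, 50.0):
    w = stick_weights(alpha)
    print(f"alpha={alpha:5.1f}: {len(w):5d} atoms carry 99.99% of the mass")
```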
The smaller $\alpha$ is, the less of the stick will be left for subsequent values (on average), yielding more concentrated distributions. The stick-breaking process is similar to the construction where one samples sequentially from marginal beta distributions in order to generate a sample from a Dirichlet distribution.[4]

Yet another way to visualize the Dirichlet process and Chinese restaurant process is as a modified Pólya urn scheme, sometimes called the Blackwell–MacQueen sampling scheme. Imagine that we start with an urn filled with $\alpha$ black balls. Then we proceed as follows:

1. Each time we need an observation, we draw a ball from the urn.
2. If the ball is black, we generate a new (non-black) colour, label a new ball with this colour, drop the new ball into the urn along with the ball we drew, and return the colour we generated.
3. Otherwise, we label a new ball with the colour of the ball we drew, drop the new ball into the urn along with the ball we drew, and return the colour we observed.

The resulting distribution over colours is the same as the distribution over tables in the Chinese restaurant process. Furthermore, when we draw a black ball, if rather than generating a new colour we instead pick a random value from a base distribution $H$ and use that value to label the new ball, the resulting distribution over labels will be the same as the distribution over the values in a Dirichlet process.

The Dirichlet process can be used as a prior distribution to estimate the probability distribution that generates the data. In this section, we consider the model
$$P \sim \operatorname{DP}(H, \alpha), \qquad X_1, \ldots, X_n \mid P \ \overset{\text{i.i.d.}}{\sim}\ P.$$
The Dirichlet process distribution satisfies prior conjugacy, posterior consistency, and the Bernstein–von Mises theorem.[5]

In this model, the posterior distribution is again a Dirichlet process. This means that the Dirichlet process is a conjugate prior for this model. The posterior distribution is given by
$$P \mid X_1, \ldots, X_n \sim \operatorname{DP}\left(\frac{\alpha}{\alpha+n} H + \frac{n}{\alpha+n}\mathbb{P}_n,\ \alpha+n\right),$$
where $\mathbb{P}_n$ is defined below.

If we take the frequentist view of probability, we believe there is a true probability distribution $P_0$ that generated the data. Then it turns out that the Dirichlet process is consistent in the weak topology, which means that for every weak neighbourhood $U$ of $P_0$, the posterior probability of $U$ converges to $1$.

In order to interpret credible sets as confidence sets, a Bernstein–von Mises theorem is needed. In the case of the Dirichlet process we compare the posterior distribution with the empirical process $\mathbb{P}_n = \frac{1}{n}\sum_{i=1}^n \delta_{X_i}$. Suppose $\mathcal{F}$ is a $P_0$-Donsker class, i.e.
$$\sqrt{n}\left(\mathbb{P}_n - P_0\right) \rightsquigarrow G_{P_0}$$
for some Brownian bridge $G_{P_0}$. Suppose also that there exists a function $F$ such that $F(x) \ge \sup_{f \in \mathcal{F}} f(x)$ with $\int F^2 \, \mathrm{d}H < \infty$. Then, $P_0$ almost surely,
$$\sqrt{n}\left(P - \mathbb{P}_n\right) \mid X_1, \ldots, X_n \rightsquigarrow G_{P_0}.$$
This implies that the credible sets you construct are asymptotic confidence sets, and Bayesian inference based on the Dirichlet process is asymptotically also valid frequentist inference.

To understand what Dirichlet processes are and the problem they solve, we consider the example of data clustering. It is a common situation that data points are assumed to be distributed in a hierarchical fashion where each data point belongs to a (randomly chosen) cluster and the members of a cluster are further distributed randomly within that cluster. For example, we might be interested in how people will vote on a number of questions in an upcoming election.
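A sketch of the conjugate update above: the posterior base measure is the mixture $\frac{\alpha}{\alpha+n}H + \frac{n}{\alpha+n}\mathbb{P}_n$, so a draw from the posterior predictive either comes from $H$ or repeats an observed data point. The data-generating distribution and $H = N(0,1)$ here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 2.0
data = rng.normal(loc=3.0, size=50)          # observed X_1..X_n (illustrative)
n = len(data)

def posterior_predictive_draw():
    """One draw from the posterior base measure (alpha*H + n*P_n)/(alpha + n)."""
    if rng.random() < alpha / (alpha + n):
        return rng.normal()                  # from the base measure H = N(0, 1)
    return rng.choice(data)                  # from the empirical measure P_n

samples = np.array([posterior_predictive_draw() for _ in range(10_000)])
print(samples.mean())   # shrinks toward the data mean as n grows
```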
A reasonable model for this situation might be to classify each voter as a liberal, a conservative or a moderate and then model the event that a voter says "Yes" to any particular question as a Bernoulli random variable with probability dependent on which political cluster they belong to. By looking at how votes were cast in previous years on similar pieces of legislation, one could fit a predictive model using a simple clustering algorithm such as k-means. That algorithm, however, requires knowing in advance the number of clusters that generated the data. In many situations, it is not possible to determine this ahead of time, and even when we can reasonably assume a number of clusters we would still like to be able to check this assumption. For example, in the voting example above the division into liberal, conservative and moderate might not be finely tuned enough; attributes such as religion, class or race could also be critical for modelling voter behaviour, resulting in more clusters in the model.

As another example, we might be interested in modelling the velocities of galaxies using a simple model assuming that the velocities are clustered, for instance by assuming each velocity is distributed according to the normal distribution $v_i \sim N(\mu_k, \sigma^2)$, where the $i$th observation belongs to the $k$th cluster of galaxies with common expected velocity. In this case it is far from obvious how to determine a priori how many clusters (of common velocities) there should be, and any model for this would be highly suspect and should be checked against the data. By using a Dirichlet process prior for the distribution of cluster means we circumvent the need to explicitly specify ahead of time how many clusters there are, although the concentration parameter still controls it implicitly.

We consider this example in more detail. A first naive model is to presuppose that there are $K$ clusters of normally distributed velocities with common known fixed variance $\sigma^2$. Denoting the event that the $i$th observation is in the $k$th cluster as $z_i = k$, we can write this model as:
$$v_i \mid z_i = k, \mu_k \sim N\left(\mu_k, \sigma^2\right), \quad z_i \mid \boldsymbol{\pi} \sim \operatorname{Categorical}\left(\boldsymbol{\pi}\right), \quad \boldsymbol{\pi} \sim \operatorname{Dir}\left(\alpha/K \cdot \mathbf{1}_K\right), \quad \mu_k \sim H(\lambda).$$
That is, we assume that the data belong to $K$ distinct clusters with means $\mu_k$ and that $\pi_k$ is the (unknown) prior probability of a data point belonging to the $k$th cluster. We assume that we have no initial information distinguishing the clusters, which is captured by the symmetric prior $\operatorname{Dir}(\alpha/K \cdot \mathbf{1}_K)$. Here $\operatorname{Dir}$ denotes the Dirichlet distribution and $\mathbf{1}_K$ denotes a vector of length $K$ where each element is 1. We further assign independent and identical prior distributions $H(\lambda)$ to each of the cluster means, where $H$ may be any parametric distribution with parameters denoted as $\lambda$. The hyper-parameters $\alpha$ and $\lambda$ are taken to be known fixed constants, chosen to reflect our prior beliefs about the system.
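A generative sketch of this finite model; all constants, and the choice $H(\lambda) = N(0, 10^2)$, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
K, alpha, sigma, n = 4, 1.0, 0.5, 200      # illustrative constants

pi = rng.dirichlet(np.full(K, alpha / K))  # pi ~ Dir(alpha/K * 1_K)
mu = rng.normal(0.0, 10.0, size=K)         # mu_k ~ H(lambda) = N(0, 10^2), assumed
z = rng.choice(K, size=n, p=pi)            # z_i ~ Categorical(pi)
v = rng.normal(mu[z], sigma)               # v_i | z_i ~ N(mu_{z_i}, sigma^2)
print(np.bincount(z, minlength=K))         # cluster occupancies
```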
To understand the connection to Dirichlet process priors we rewrite this model in an equivalent but more suggestive form: instead of imagining that each data point is first assigned a cluster and then drawn from the distribution associated to that cluster, we now think of each observation as being associated with a parameter $\tilde{\mu}_i$ drawn from some discrete distribution $G$ with support on the $K$ means. That is, we are now treating the $\tilde{\mu}_i$ as being drawn from the random distribution $G$, and our prior information is incorporated into the model by the distribution over distributions $G$.

We would now like to extend this model to work without pre-specifying a fixed number of clusters $K$. Mathematically, this means we would like to select a random prior distribution $G(\tilde{\mu}_i) = \sum_{k=1}^{\infty} \pi_k \delta_{\mu_k}(\tilde{\mu}_i)$, where the values of the cluster means $\mu_k$ are again independently distributed according to $H(\lambda)$ and the distribution over $\pi_k$ is symmetric over the infinite set of clusters. This is exactly what is accomplished by the model:
$$G \sim \operatorname{DP}\left(H(\lambda), \alpha\right), \qquad \tilde{\mu}_i \mid G \sim G, \qquad v_i \mid \tilde{\mu}_i \sim N\left(\tilde{\mu}_i, \sigma^2\right).$$

With this in hand we can better understand the computational merits of the Dirichlet process. Suppose that we wanted to draw $n$ observations from the naive model with exactly $K$ clusters. A simple algorithm for doing this would be to draw $K$ values of $\mu_k$ from $H(\lambda)$, a distribution $\boldsymbol{\pi}$ from $\operatorname{Dir}(\alpha/K \cdot \mathbf{1}_K)$, and then for each observation independently sample the cluster $k$ with probability $\pi_k$ and the value of the observation according to $N(\mu_k, \sigma^2)$. It is easy to see that this algorithm does not work in the case where we allow infinite clusters, because this would require sampling an infinite-dimensional parameter $\boldsymbol{\pi}$. However, it is still possible to sample observations $v_i$: one can, e.g., use the Chinese restaurant representation described above and calculate the probability for the used clusters and for a new cluster to be created (a generative sketch is given below). This avoids having to explicitly specify $\boldsymbol{\pi}$. Other solutions are based on a truncation of clusters: a (high) upper bound on the true number of clusters is introduced, and cluster numbers higher than that bound are treated as a single cluster.

Fitting the model described above based on observed data $D$ means finding the posterior distribution $p(\boldsymbol{\pi}, \boldsymbol{\mu} \mid D)$ over cluster probabilities and their associated means. In the infinite-dimensional case it is obviously impossible to write down the posterior explicitly. It is, however, possible to draw samples from this posterior using a modified Gibbs sampler.[6] This is the critical fact that makes the Dirichlet process prior useful for inference.

Dirichlet processes are frequently used in Bayesian nonparametric statistics. "Nonparametric" here does not mean a parameter-less model, but rather a model in which representations grow as more data are observed.
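The generative sketch promised above, using the Chinese restaurant representation so that $\boldsymbol{\pi}$ is never instantiated; all constants and the choice of $H$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, sigma, n = 1.0, 0.5, 300

mus, counts, v = [], [], []
for i in range(n):
    # Existing cluster k has probability count_k/(alpha+i); a new one, alpha/(alpha+i).
    probs = np.array(counts + [alpha], dtype=float) / (alpha + i)
    k = rng.choice(len(probs), p=probs)
    if k == len(mus):                       # open a new cluster
        mus.append(rng.normal(0.0, 10.0))   # mu_k ~ H(lambda) = N(0, 10^2), assumed
        counts.append(0)
    counts[k] += 1
    v.append(rng.normal(mus[k], sigma))     # v_i | mu_k ~ N(mu_k, sigma^2)

print(len(mus), "clusters generated for", n, "observations")
```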
Bayesian nonparametric models have gained considerable popularity in the field of machine learning because of the above-mentioned flexibility, especially in unsupervised learning. In a Bayesian nonparametric model, the prior and posterior distributions are not parametric distributions, but stochastic processes.[7] The fact that the Dirichlet distribution is a probability distribution on the simplex of sets of non-negative numbers that sum to one makes it a good candidate to model distributions over distributions or distributions over functions. Additionally, the nonparametric nature of this model makes it an ideal candidate for clustering problems where the distinct number of clusters is unknown beforehand. In addition, the Dirichlet process has also been used for developing mixture-of-experts models, in the context of supervised learning algorithms (regression or classification settings), for instance mixtures of Gaussian process experts, where the number of required experts must be inferred from the data.[8][9]

As draws from a Dirichlet process are discrete, an important use is as a prior probability in infinite mixture models. In this case, $S$ is the parametric set of component distributions. The generative process is therefore that a sample is drawn from a Dirichlet process, and for each data point, in turn, a value is drawn from this sample distribution and used as the component distribution for that data point. The fact that there is no limit to the number of distinct components which may be generated makes this kind of model appropriate for the case when the number of mixture components is not well-defined in advance; examples include the infinite mixture of Gaussians model[10] and associated mixture regression models.[11] The infinite nature of these models also lends them to natural language processing applications, where it is often desirable to treat the vocabulary as an infinite, discrete set.

The Dirichlet process can also be used for nonparametric hypothesis testing, i.e. to develop Bayesian nonparametric versions of the classical nonparametric hypothesis tests, e.g. the sign test, the Wilcoxon rank-sum test, the Wilcoxon signed-rank test, etc. For instance, Bayesian nonparametric versions of the Wilcoxon rank-sum test and the Wilcoxon signed-rank test have been developed using the imprecise Dirichlet process, a prior-ignorance Dirichlet process.[citation needed]
https://en.wikipedia.org/wiki/Dirichlet_process
In statistics, the matrix variate Dirichlet distribution is a generalization of the matrix variate beta distribution and of the Dirichlet distribution.

Suppose $U_1, \ldots, U_r$ are $p \times p$ positive definite matrices with $I_p - \sum_{i=1}^r U_i$ also positive definite, where $I_p$ is the $p \times p$ identity matrix. Then we say that the $U_i$ have a matrix variate Dirichlet distribution, $(U_1,\ldots,U_r) \sim D_p(a_1,\ldots,a_r; a_{r+1})$, if their joint probability density function is
$$\left\{\beta_p\left(a_1,\ldots,a_{r+1}\right)\right\}^{-1} \prod_{i=1}^{r} \det\left(U_i\right)^{a_i-(p+1)/2} \det\left(I_p-\sum_{i=1}^r U_i\right)^{a_{r+1}-(p+1)/2},$$
where $a_i > (p-1)/2,\ i = 1,\ldots,r+1$, and $\beta_p(\cdots)$ is the multivariate beta function. If we write $U_{r+1} = I_p - \sum_{i=1}^r U_i$, then the PDF takes the simpler form
$$\left\{\beta_p\left(a_1,\ldots,a_{r+1}\right)\right\}^{-1} \prod_{i=1}^{r+1} \det\left(U_i\right)^{a_i-(p+1)/2},$$
on the understanding that $\sum_{i=1}^{r+1} U_i = I_p$.

Suppose $S_i \sim W_p(n_i, \Sigma),\ i = 1,\ldots,r+1$, are independently distributed Wishart $p \times p$ positive definite matrices. Then, defining $U_i = S^{-1/2} S_i \left(S^{-1/2}\right)^T$ (where $S = \sum_{i=1}^{r+1} S_i$ is the sum of the matrices and $S^{1/2}\left(S^{1/2}\right)^T = S$ is any reasonable factorization of $S$), we have
$$\left(U_1,\ldots,U_r\right) \sim D_p\left(n_1/2,\ldots,n_r/2;\, n_{r+1}/2\right).$$

If $(U_1,\ldots,U_r) \sim D_p(a_1,\ldots,a_{r+1})$, and if $s \le r$, then the marginal distribution of the first $s$ matrices is
$$\left(U_1,\ldots,U_s\right) \sim D_p\left(a_1,\ldots,a_s;\, a_{s+1}+\cdots+a_{r+1}\right).$$
Also, with the same notation as above, the conditional density of $(U_{s+1},\ldots,U_r) \mid (U_1,\ldots,U_s)$ has a closed form, where we write $U_{r+1} = I_p - \sum_{i=1}^r U_i$.

Suppose $(U_1,\ldots,U_r) \sim D_p(a_1,\ldots,a_{r+1})$ and suppose that $S_1,\ldots,S_t$ is a partition of $[r+1] = \{1,\ldots,r+1\}$ (that is, $\cup_{i=1}^t S_i = [r+1]$ and $S_i \cap S_j = \emptyset$ if $i \neq j$). Then, writing $U_{(j)} = \sum_{i \in S_j} U_i$ and $a_{(j)} = \sum_{i \in S_j} a_i$ (with $U_{r+1} = I_p - \sum_{i=1}^r U_i$), the aggregated matrices again follow a matrix variate Dirichlet distribution with the aggregated parameters $a_{(j)}$.

Suppose $(U_1,\ldots,U_r) \sim D_p(a_1,\ldots,a_{r+1})$. Partition each matrix as
$$U_i = \begin{pmatrix} U_{11(i)} & U_{12(i)} \\ U_{21(i)} & U_{22(i)} \end{pmatrix},$$
where $U_{11(i)}$ is $p_1 \times p_1$ and $U_{22(i)}$ is $p_2 \times p_2$. Writing the Schur complement $U_{22\cdot 1(i)} = U_{22(i)} - U_{21(i)} U_{11(i)}^{-1} U_{12(i)}$, the corresponding sub-blocks and their Schur complements again follow matrix variate Dirichlet distributions.
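A sketch of the Wishart construction above using SciPy; the dimension, degrees of freedom and identity scale matrix are illustrative, and the symmetric eigendecomposition square root stands in for "any reasonable factorization" of $S$:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, dfs = 3, [6, 7, 8]          # r + 1 = 3 Wishart matrices with n_i degrees of freedom
Sigma = np.eye(p)

S_parts = [wishart.rvs(df, Sigma, random_state=rng) for df in dfs]
S = sum(S_parts)

w, V = np.linalg.eigh(S)                    # symmetric square root of S
S_inv_half = V @ np.diag(w**-0.5) @ V.T     # S^{-1/2}

U = [S_inv_half @ Si @ S_inv_half.T for Si in S_parts]
print(np.allclose(sum(U), np.eye(p)))       # the U_i sum to the identity
```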
https://en.wikipedia.org/wiki/Matrix_variate_Dirichlet_distribution
The partition function or configuration integral, as used in probability theory, information theory and dynamical systems, is a generalization of the definition of a partition function in statistical mechanics. It is a special case of a normalizing constant in probability theory, for the Boltzmann distribution. The partition function occurs in many problems of probability theory because, in situations where there is a natural symmetry, its associated probability measure, the Gibbs measure, has the Markov property. This means that the partition function occurs not only in physical systems with translation symmetry, but also in such varied settings as neural networks (the Hopfield network), and applications such as genomics, corpus linguistics and artificial intelligence, which employ Markov networks and Markov logic networks. The Gibbs measure is also the unique measure that has the property of maximizing the entropy for a fixed expectation value of the energy; this underlies the appearance of the partition function in maximum entropy methods and the algorithms derived therefrom.

The partition function ties together many different concepts, and thus offers a general framework in which many different kinds of quantities may be calculated. In particular, it shows how to calculate expectation values and Green's functions, forming a bridge to Fredholm theory. It also provides a natural setting for the information geometry approach to information theory, where the Fisher information metric can be understood to be a correlation function derived from the partition function; it happens to define a Riemannian manifold.

When the setting for random variables is on complex projective space or projective Hilbert space, geometrized with the Fubini–Study metric, the theory of quantum mechanics and more generally quantum field theory results. In these theories, the partition function is heavily exploited in the path integral formulation, with great success, leading to many formulas nearly identical to those reviewed here. However, because the underlying measure space is complex-valued, as opposed to the real-valued simplex of probability theory, an extra factor of $i$ appears in many formulas. Tracking this factor is troublesome, and is not done here. This article focuses primarily on classical probability theory, where the sum of probabilities totals to one.

Given a set of random variables $X_i$ taking on values $x_i$, and some sort of potential function or Hamiltonian $H(x_1, x_2, \dots)$, the partition function is defined as
$$Z(\beta) = \sum_{x_i} \exp\left(-\beta H(x_1, x_2, \dots)\right).$$
The function $H$ is understood to be a real-valued function on the space of states $\{X_1, X_2, \dots\}$, while $\beta$ is a real-valued free parameter (conventionally, the inverse temperature). The sum over the $x_i$ is understood to be a sum over all possible values that each of the random variables $X_i$ may take. Thus, the sum is to be replaced by an integral when the $X_i$ are continuous rather than discrete. Thus, one writes
$$Z(\beta) = \int \exp\left(-\beta H(x_1, x_2, \dots)\right)\, dx_1\, dx_2 \cdots$$
for the case of continuously-varying $X_i$.
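A direct numerical transcription of the discrete definition, sketched for a tiny two-spin system with an illustrative coupling Hamiltonian:

```python
import itertools
import numpy as np

def hamiltonian(s1, s2):
    return -s1 * s2          # illustrative two-spin coupling

def partition_function(beta):
    states = itertools.product([-1, 1], repeat=2)
    return sum(np.exp(-beta * hamiltonian(*s)) for s in states)

beta = 1.0
Z = partition_function(beta)
# Boltzmann probability of each configuration, normalized by Z.
for s in itertools.product([-1, 1], repeat=2):
    print(s, np.exp(-beta * hamiltonian(*s)) / Z)
```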
When $H$ is an observable, such as a finite-dimensional matrix or an infinite-dimensional Hilbert space operator or element of a C-star algebra, it is common to express the summation as a trace, so that
$$Z(\beta) = \operatorname{tr}\left(\exp\left(-\beta H\right)\right).$$
When $H$ is infinite-dimensional, then, for the above notation to be valid, the argument must be trace class, that is, of a form such that the summation exists and is bounded.

The number of variables $X_i$ need not be countable, in which case the sums are to be replaced by functional integrals. Although there are many notations for functional integrals, a common one would be
$$Z = \int \mathcal{D}\varphi\, \exp\left(-\beta H[\varphi]\right).$$
Such is the case for the partition function in quantum field theory.

A common, useful modification to the partition function is to introduce auxiliary functions. This allows, for example, the partition function to be used as a generating function for correlation functions. This is discussed in greater detail below.

The role or meaning of the parameter $\beta$ can be understood in a variety of different ways. In classical thermodynamics, it is an inverse temperature. More generally, one would say that it is the variable that is conjugate to some (arbitrary) function $H$ of the random variables $X$. The word conjugate here is used in the sense of conjugate generalized coordinates in Lagrangian mechanics; thus, properly, $\beta$ is a Lagrange multiplier. It is not uncommonly called the generalized force. All of these concepts have in common the idea that one value is meant to be kept fixed as others, interconnected in some complicated way, are allowed to vary. In the current case, the value to be kept fixed is the expectation value of $H$, even as many different probability distributions can give rise to exactly this same (fixed) value.

For the general case, one considers a set of functions $\{H_k(x_1, \dots)\}$ that each depend on the random variables $X_i$. These functions are chosen because one wants to hold their expectation values constant, for one reason or another. To constrain the expectation values in this way, one applies the method of Lagrange multipliers. In the general case, maximum entropy methods illustrate the manner in which this is done.

Some specific examples are in order. In basic thermodynamics problems, when using the canonical ensemble, the use of just one parameter $\beta$ reflects the fact that there is only one expectation value that must be held constant: the energy (due to conservation of energy). For chemistry problems involving chemical reactions, the grand canonical ensemble provides the appropriate foundation, and there are two Lagrange multipliers. One is to hold the energy constant, and another, the fugacity, is to hold the particle count constant (as chemical reactions involve the recombination of a fixed number of atoms). For the general case, one has
$$Z(\beta) = \sum_{x_i} \exp\left(-\sum_k \beta_k H_k(x_i)\right),$$
with $\beta = (\beta_1, \beta_2, \dots)$ a point in a space.
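For a finite-dimensional observable, the trace form $Z(\beta) = \operatorname{tr}(\exp(-\beta H))$ given above can be evaluated directly from the eigenvalues; a sketch with a random symmetric matrix standing in for the Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                    # a random symmetric (Hermitian) "Hamiltonian"

beta = 1.0
eigs = np.linalg.eigvalsh(H)
Z = np.exp(-beta * eigs).sum()       # tr(exp(-beta H)) via the eigenvalues
print(Z)
```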
For a collection of observables $H_k$, one would write
$$Z(\beta) = \operatorname{tr}\left[\exp\left(-\sum_k \beta_k H_k\right)\right].$$
As before, it is presumed that the argument of $\operatorname{tr}$ is trace class.

The corresponding Gibbs measure then provides a probability distribution such that the expectation value of each $H_k$ is a fixed value. More precisely, one has
$$\frac{\partial}{\partial \beta_k}\left(-\log Z\right) = \langle H_k \rangle = \mathrm{E}\left[H_k\right],$$
with the angle brackets $\langle H_k \rangle$ denoting the expected value of $H_k$, and $\operatorname{E}[\,\cdot\,]$ being a common alternative notation. A precise definition of this expectation value is given below.

Although the value of $\beta$ is commonly taken to be real, it need not be, in general; this is discussed in the section Normalization below. The values of $\beta$ can be understood to be the coordinates of points in a space; this space is in fact a manifold, as sketched below. The study of these spaces as manifolds constitutes the field of information geometry.

The potential function itself commonly takes the form of a sum
$$H(x_1, x_2, \dots) = \sum_s V(s),$$
where the sum over $s$ is a sum over some subset of the power set $P(X)$ of the set $X = \{x_1, x_2, \dots\}$. For example, in statistical mechanics, such as the Ising model, the sum is over pairs of nearest neighbors. In probability theory, such as Markov networks, the sum might be over the cliques of a graph; so, for the Ising model and other lattice models, the maximal cliques are edges.

The fact that the potential function can be written as a sum usually reflects the fact that it is invariant under the action of a group symmetry, such as translational invariance. Such symmetries can be discrete or continuous; they materialize in the correlation functions for the random variables (discussed below). Thus a symmetry in the Hamiltonian becomes a symmetry of the correlation function (and vice versa). This symmetry has a critically important interpretation in probability theory: it implies that the Gibbs measure has the Markov property; that is, it is independent of the random variables in a certain way, or, equivalently, the measure is identical on the equivalence classes of the symmetry. This leads to the widespread appearance of the partition function in problems with the Markov property, such as Hopfield networks.

The value of the expression $\exp\left(-\beta H(x_1, x_2, \dots)\right)$ can be interpreted as a likelihood that a specific configuration of values $(x_1, x_2, \dots)$ occurs in the system. Thus, given a specific configuration $(x_1, x_2, \dots)$,
$$P(x_1, x_2, \dots) = \frac{1}{Z(\beta)} \exp\left(-\beta H(x_1, x_2, \dots)\right)$$
is the probability of the configuration $(x_1, x_2, \dots)$ occurring in the system, which is now properly normalized so that $0 \le P(x_1, x_2, \dots) \le 1$, and such that the sum over all configurations totals to one. As such, the partition function can be understood to provide a measure (a probability measure) on the probability space; formally, it is called the Gibbs measure.
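A numerical check of the derivative identity $\langle H \rangle = -\partial \log Z / \partial \beta$, sketched for the two-spin system used earlier; finite differences stand in for the exact derivative:

```python
import itertools
import numpy as np

states = list(itertools.product([-1, 1], repeat=2))
H = lambda s: -s[0] * s[1]

def log_Z(beta):
    return np.log(sum(np.exp(-beta * H(s)) for s in states))

beta, eps = 1.0, 1e-6
lhs = -(log_Z(beta + eps) - log_Z(beta - eps)) / (2 * eps)   # -d(log Z)/d(beta)

Z = np.exp(log_Z(beta))
rhs = sum(H(s) * np.exp(-beta * H(s)) for s in states) / Z   # <H> computed directly
print(lhs, rhs)   # the two agree to numerical precision
```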
It generalizes the narrower concepts of the grand canonical ensemble and canonical ensemble in statistical mechanics.

There exists at least one configuration $(x_1, x_2, \dots)$ for which the probability is maximized; this configuration is conventionally called the ground state. If the configuration is unique, the ground state is said to be non-degenerate, and the system is said to be ergodic; otherwise the ground state is degenerate. The ground state may or may not commute with the generators of the symmetry; if it commutes, it is said to be an invariant measure. When it does not commute, the symmetry is said to be spontaneously broken.

Conditions under which a ground state exists and is unique are given by the Karush–Kuhn–Tucker conditions; these conditions are commonly used to justify the use of the Gibbs measure in maximum-entropy problems.[citation needed]

The values taken by $\beta$ depend on the mathematical space over which the random field varies. Thus, real-valued random fields take values on a simplex: this is the geometrical way of saying that the sum of probabilities must total to one. For quantum mechanics, the random variables range over complex projective space (or complex-valued projective Hilbert space), where the random variables are interpreted as probability amplitudes. The emphasis here is on the word projective, as the amplitudes are still normalized to one. The normalization for the potential function is the Jacobian for the appropriate mathematical space: it is 1 for ordinary probabilities, and $i$ for Hilbert space; thus, in quantum field theory, one sees $itH$ in the exponential, rather than $\beta H$. The partition function is very heavily exploited in the path integral formulation of quantum field theory, to great effect. The theory there is very nearly identical to that presented here, aside from this difference, and the fact that it is usually formulated on four-dimensional space-time, rather than in a general way.

The partition function is commonly used as a probability-generating function for expectation values of various functions of the random variables. So, for example, taking $\beta$ as an adjustable parameter, the derivative of $\log(Z(\beta))$ with respect to $\beta$,
$$\operatorname{E}[H] = \langle H \rangle = -\frac{\partial \log(Z(\beta))}{\partial \beta},$$
gives the average (expectation value) of $H$. In physics, this would be called the average energy of the system.

Given the definition of the probability measure above, the expectation value of any function $f$ of the random variables $X$ may now be written as expected: so, for discrete-valued $X$, one writes
$$\begin{aligned}\langle f \rangle &= \sum_{x_i} f(x_1, x_2, \dots) P(x_1, x_2, \dots)\\ &= \frac{1}{Z(\beta)} \sum_{x_i} f(x_1, x_2, \dots) \exp\left(-\beta H(x_1, x_2, \dots)\right).\end{aligned}$$
The above notation makes sense for a finite number of discrete random variables. In more general settings, the summations should be replaced with integrals over a probability space.
Thus, for example, the entropy is given by
$$\begin{aligned}S &= -k_{\text{B}} \langle \ln P \rangle\\ &= -k_{\text{B}} \sum_{x_i} P(x_1, x_2, \dots) \ln P(x_1, x_2, \dots)\\ &= k_{\text{B}} \left(\beta \langle H \rangle + \log Z(\beta)\right).\end{aligned}$$
The Gibbs measure is the unique statistical distribution that maximizes the entropy for a fixed expectation value of the energy; this underlies its use in maximum entropy methods.

The points $\beta$ can be understood to form a space, and specifically, a manifold. Thus, it is reasonable to ask about the structure of this manifold; this is the task of information geometry. Multiple derivatives with regard to the Lagrange multipliers give rise to a positive semi-definite covariance matrix
$$g_{ij}(\beta) = \frac{\partial^2}{\partial \beta^i \partial \beta^j} \log Z(\beta) = \left\langle \left(H_i - \langle H_i \rangle\right)\left(H_j - \langle H_j \rangle\right)\right\rangle.$$
This matrix is positive semi-definite, and may be interpreted as a metric tensor, specifically, a Riemannian metric. Equipping the space of Lagrange multipliers with a metric in this way turns it into a Riemannian manifold.[1] The study of such manifolds is referred to as information geometry; the metric above is the Fisher information metric. Here, $\beta$ serves as a coordinate on the manifold. It is interesting to compare the above definition to the simpler Fisher information, from which it is inspired.

That the above defines the Fisher information metric can be readily seen by explicitly substituting for the expectation value:
$$\begin{aligned}g_{ij}(\beta) &= \left\langle \left(H_i - \left\langle H_i \right\rangle\right)\left(H_j - \left\langle H_j \right\rangle\right)\right\rangle\\ &= \sum_x P(x) \left(H_i - \left\langle H_i \right\rangle\right)\left(H_j - \left\langle H_j \right\rangle\right)\\ &= \sum_x P(x) \left(H_i + \frac{\partial \log Z}{\partial \beta_i}\right)\left(H_j + \frac{\partial \log Z}{\partial \beta_j}\right)\\ &= \sum_x P(x) \frac{\partial \log P(x)}{\partial \beta^i} \frac{\partial \log P(x)}{\partial \beta^j},\end{aligned}$$
where we have written $P(x)$ for $P(x_1, x_2, \dots)$ and the summation is understood to be over all values of all random variables $X_k$. For continuous-valued random variables, the summations are replaced by integrals, of course.

Curiously, the Fisher information metric can also be understood as the flat-space Euclidean metric, after an appropriate change of variables, as described in the main article on it. When the $\beta$ are complex-valued, the resulting metric is the Fubini–Study metric. When written in terms of mixed states, instead of pure states, it is known as the Bures metric.

By introducing artificial auxiliary functions $J_k$ into the partition function, it can then be used to obtain the expectation value of the random variables.
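A sketch computing $g_{ij}$ both ways for a toy system with two illustrative observables: central finite differences of $\log Z$ on one side, the covariance of the $H_k$ under the Gibbs measure on the other:

```python
import itertools
import numpy as np

states = list(itertools.product([-1, 1], repeat=2))
H1 = lambda s: -s[0] * s[1]        # two illustrative observables
H2 = lambda s: s[0] + s[1]

def log_Z(b):
    return np.log(sum(np.exp(-b[0] * H1(s) - b[1] * H2(s)) for s in states))

def covariance(b):
    p = np.array([np.exp(-b[0] * H1(s) - b[1] * H2(s)) for s in states])
    p /= p.sum()
    h = np.array([[H1(s), H2(s)] for s in states])
    mean = p @ h
    return (h - mean).T @ np.diag(p) @ (h - mean)

b, eps = np.array([0.7, 0.3]), 1e-4
hess = np.zeros((2, 2))            # Hessian of log Z by central differences
for i in range(2):
    for j in range(2):
        ei, ej = np.eye(2)[i] * eps, np.eye(2)[j] * eps
        hess[i, j] = (log_Z(b + ei + ej) - log_Z(b + ei - ej)
                      - log_Z(b - ei + ej) + log_Z(b - ei - ej)) / (4 * eps**2)

print(np.round(hess, 6))
print(np.round(covariance(b), 6))  # matches the Hessian: g_ij computed two ways
```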
Thus, for example, by writing
$$\begin{aligned}Z(\beta, J) &= Z(\beta, J_1, J_2, \dots)\\ &= \sum_{x_i} \exp\left(-\beta H(x_1, x_2, \dots) + \sum_n J_n x_n\right),\end{aligned}$$
one then has
$$\operatorname{E}[x_k] = \langle x_k \rangle = \left.\frac{\partial}{\partial J_k} \log Z(\beta, J)\right|_{J=0}$$
as the expectation value of $x_k$. In the path integral formulation of quantum field theory, these auxiliary functions are commonly referred to as source fields.

Multiple differentiations lead to the connected correlation functions of the random variables. Thus the correlation function $C(x_j, x_k)$ between variables $x_j$ and $x_k$ is given by
$$C(x_j, x_k) = \left.\frac{\partial}{\partial J_j} \frac{\partial}{\partial J_k} \log Z(\beta, J)\right|_{J=0}.$$

For the case where $H$ can be written as a quadratic form involving a differential operator, that is, as
$$H = \frac{1}{2} \sum_n x_n D x_n,$$
the partition function can be understood to be a sum or integral over Gaussians. The correlation function $C(x_j, x_k)$ can be understood to be the Green's function for the differential operator (and generally giving rise to Fredholm theory). In the quantum field theory setting, such functions are referred to as propagators; higher order correlators are called n-point functions; working with them defines the effective action of a theory.

When the random variables are anti-commuting Grassmann numbers, then the partition function can be expressed as a determinant of the operator $D$. This is done by writing it as a Berezin integral (also called a Grassmann integral).

Partition functions are used to discuss critical scaling and universality, and are subject to the renormalization group.
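A sketch of the source-field trick in the finite Gaussian case: for a quadratic $H = \frac{1}{2}x^{\mathsf T}Dx$ the integral gives $\log Z(J) = \frac{1}{2}J^{\mathsf T}D^{-1}J + \text{const}$, so second derivatives of $\log Z$ recover $D^{-1}$, the Green's function of $D$. The matrix $D$ here is an illustrative choice:

```python
import numpy as np

D = np.array([[2.0, 0.5],
              [0.5, 1.0]])               # an illustrative positive definite operator
D_inv = np.linalg.inv(D)

def log_Z(J):
    # Z(J) = integral exp(-x.D.x/2 + J.x) dx = const * exp(J.D^{-1}.J / 2)
    return 0.5 * J @ D_inv @ J           # dropping the J-independent constant

eps = 1e-5
C = np.zeros((2, 2))                     # C(x_j, x_k) via second differences at J = 0
for j in range(2):
    for k in range(2):
        ej, ek = np.eye(2)[j] * eps, np.eye(2)[k] * eps
        C[j, k] = (log_Z(ej + ek) - log_Z(ej - ek)
                   - log_Z(-ej + ek) + log_Z(-ej - ek)) / (4 * eps**2)

print(np.round(C, 6))      # equals D^{-1}, the Green's function of D
print(np.round(D_inv, 6))
```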
https://en.wikipedia.org/wiki/Partition_function_(mathematics)
In quantum field theory, partition functions are generating functionals for correlation functions, making them key objects of study in the path integral formalism. They are the imaginary time versions of statistical mechanics partition functions, giving rise to a close connection between these two areas of physics. Partition functions can rarely be solved for exactly, although free theories do admit such solutions. Instead, a perturbative approach is usually implemented, this being equivalent to summing over Feynman diagrams.

In a $d$-dimensional field theory with a real scalar field $\phi$ and action $S[\phi]$, the partition function is defined in the path integral formalism as the functional[1]
$$Z[J] = \int \mathcal{D}\phi \; e^{iS[\phi] + i\int d^dx\, J(x)\phi(x)},$$
where $J(x)$ is a fictitious source current. It acts as a generating functional for arbitrary $n$-point correlation functions
$$\langle \phi(x_1)\cdots\phi(x_n)\rangle = (-i)^n \frac{1}{Z[0]}\left.\frac{\delta^n Z[J]}{\delta J(x_1)\cdots\delta J(x_n)}\right|_{J=0}.$$
The derivatives used here are functional derivatives rather than regular derivatives, since they are acting on functionals rather than regular functions. From this it follows that an equivalent expression for the partition function, reminiscent of a power series in source currents, expands $Z[J]$ in integrals of products $J(x_1)\cdots J(x_n)$ against the $n$-point correlation functions.[2]

In curved spacetimes there is an added subtlety that must be dealt with, due to the fact that the initial vacuum state need not be the same as the final vacuum state.[3] Partition functions can also be constructed for composite operators in the same way as they are for fundamental fields. Correlation functions of these operators can then be calculated as functional derivatives of these functionals.[4] For example, the partition function for a composite operator $\mathcal{O}(x)$ is given by
$$Z[J] = \int \mathcal{D}\phi \; e^{iS[\phi] + i\int d^dx\, J(x)\mathcal{O}(x)}.$$

Knowing the partition function completely solves the theory, since it allows for the direct calculation of all of its correlation functions. However, there are very few cases where the partition function can be calculated exactly. While free theories do admit exact solutions, interacting theories generally do not. Instead the partition function can be evaluated at weak coupling perturbatively, which amounts to regular perturbation theory using Feynman diagrams with $J$ insertions on the external legs.[5] The symmetry factors for these types of diagrams differ from those of correlation functions, since all external legs have identical $J$ insertions that can be interchanged, whereas the external legs of correlation functions are fixed at specific coordinates and therefore cannot be interchanged.

By performing a Wick transformation, the partition function can be expressed in Euclidean spacetime as[6]
$$Z[J] = \int \mathcal{D}\phi \; e^{-S_E[\phi] + \int d^dx_E\, J(x_E)\phi(x_E)},$$
where $S_E$ is the Euclidean action and $x_E$ are Euclidean coordinates. This form is closely connected to the partition function in statistical mechanics, especially since the Euclidean Lagrangian is usually bounded from below, in which case it can be interpreted as an energy density. It also allows for the interpretation of the exponential factor as a statistical weight for the field configurations, with larger fluctuations in the gradient or field values leading to greater suppression. This connection with statistical mechanics also lends additional intuition for how correlation functions should behave in a quantum field theory.

Most of the same principles of the scalar case hold for more general theories with additional fields. Each field requires the introduction of its own fictitious current, with antiparticle fields requiring their own separate currents.
Acting on the partition function with a derivative of a current brings down its associated field from the exponential, allowing for the construction of arbitrary correlation functions. After differentiation, the currents are set to zero when correlation functions in a vacuum state are desired, but the currents can also be set to take on particular values to yield correlation functions in non-vanishing background fields.

For partition functions with Grassmann-valued fermion fields, the sources are also Grassmann-valued.[7] For example, a theory with a single Dirac fermion $\psi(x)$ requires the introduction of two Grassmann currents $\eta$ and $\bar{\eta}$, so that the partition function is
$$Z[\eta, \bar{\eta}] = \int \mathcal{D}\psi \, \mathcal{D}\bar{\psi} \; e^{iS[\psi, \bar{\psi}] + i\int d^dx\, \left(\bar{\eta}\psi + \bar{\psi}\eta\right)}.$$
Functional derivatives with respect to $\bar{\eta}$ give fermion fields while derivatives with respect to $\eta$ give anti-fermion fields in the correlation functions.

A thermal field theory at temperature $T$ is equivalent in the Euclidean formalism to a theory with a compactified temporal direction of length $\beta = 1/T$. Partition functions must be modified appropriately by imposing periodicity conditions on the fields and on the Euclidean spacetime integrals. This partition function can be taken as the definition of the thermal field theory in the imaginary time formalism.[8] Correlation functions are acquired from the partition function through the usual functional derivatives with respect to the currents.

The partition function can be solved exactly in free theories by completing the square in terms of the fields. Since a shift by a constant does not affect the path integral measure, this allows for separating the partition function into a constant of proportionality $N$ arising from the path integral, and a second term that only depends on the current. For the scalar theory this yields
$$Z[J] = N \exp\left(-\frac{1}{2}\int d^dx \, d^dy \; J(x)\, \Delta_F(x-y)\, J(y)\right),$$
where $\Delta_F(x-y)$ is the position space Feynman propagator. This partition function fully determines the free field theory. In the case of a theory with a single free Dirac fermion, completing the square yields a partition function of the form
$$Z[\eta, \bar{\eta}] = N \exp\left(-\int d^dx \, d^dy \; \bar{\eta}(x)\, \Delta_D(x-y)\, \eta(y)\right),$$
where $\Delta_D(x-y)$ is the position space Dirac propagator.
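The completing-the-square manipulation can be mimicked in zero dimensions, where the path integral collapses to an ordinary one: with a single "field" variable and Euclidean weight $e^{-m^2\phi^2/2 + J\phi}$, $\log Z$ is quadratic in $J$ and its second derivative is the "propagator" $1/m^2$. A sketch under these toy assumptions:

```python
import numpy as np
from scipy.integrate import quad

m2 = 2.0   # "mass squared" of the zero-dimensional free theory (illustrative)

def Z(J):
    # Z(J) = integral dphi exp(-m2*phi^2/2 + J*phi) = Z(0)*exp(J^2/(2*m2))
    val, _ = quad(lambda phi: np.exp(-0.5 * m2 * phi**2 + J * phi), -12, 12)
    return val

eps = 1e-2
two_point = (np.log(Z(eps)) - 2 * np.log(Z(0.0)) + np.log(Z(-eps))) / eps**2
print(two_point, 1.0 / m2)   # second J-derivative of log Z reproduces 1/m^2
```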
https://en.wikipedia.org/wiki/Partition_function_(quantum_field_theory)
In mechanics, the virial theorem provides a general equation that relates the average over time of the total kinetic energy of a stable system of discrete particles, bound by a conservative force (where the work done is independent of path), with that of the total potential energy of the system. Mathematically, the theorem states that
$$\langle T \rangle = -\frac{1}{2} \sum_{k=1}^N \langle \mathbf{F}_k \cdot \mathbf{r}_k \rangle,$$
where $T$ is the total kinetic energy of the $N$ particles, $\mathbf{F}_k$ represents the force on the $k$th particle, which is located at position $\mathbf{r}_k$, and angle brackets represent the average over time of the enclosed quantity. The word virial for the right-hand side of the equation derives from vis, the Latin word for "force" or "energy", and was given its technical definition by Rudolf Clausius in 1870.[1]

The significance of the virial theorem is that it allows the average total kinetic energy to be calculated even for very complicated systems that defy an exact solution, such as those considered in statistical mechanics; this average total kinetic energy is related to the temperature of the system by the equipartition theorem. However, the virial theorem does not depend on the notion of temperature and holds even for systems that are not in thermal equilibrium. The virial theorem has been generalized in various ways, most notably to a tensor form.

If the force between any two particles of the system results from a potential energy $V(r) = \alpha r^n$ that is proportional to some power $n$ of the interparticle distance $r$, the virial theorem takes the simple form
$$2\langle T \rangle = n \langle V_{\text{TOT}} \rangle.$$
Thus, twice the average total kinetic energy $\langle T \rangle$ equals $n$ times the average total potential energy $\langle V_{\text{TOT}} \rangle$. Whereas $V(r)$ represents the potential energy between two particles of distance $r$, $V_{\text{TOT}}$ represents the total potential energy of the system, i.e., the sum of the potential energy $V(r)$ over all pairs of particles in the system. A common example of such a system is a star held together by its own gravity, where $n = -1$.

In 1870, Rudolf Clausius delivered the lecture "On a Mechanical Theorem Applicable to Heat" to the Association for Natural and Medical Sciences of the Lower Rhine, following a 20-year study of thermodynamics. The lecture stated that the mean vis viva of the system is equal to its virial, or that the average kinetic energy is one half of the average potential energy. The virial theorem can be obtained directly from Lagrange's identity as applied in classical gravitational dynamics, the original form of which was included in Lagrange's "Essay on the Problem of Three Bodies" published in 1772. Carl Jacobi's generalization of the identity to $N$ bodies and to the present form of Laplace's identity closely resembles the classical virial theorem.
However, the interpretations leading to the development of the equations were very different, since at the time of development, statistical dynamics had not yet unified the separate studies of thermodynamics and classical dynamics.[2]The theorem was later utilized, popularized, generalized and further developed byJames Clerk Maxwell,Lord Rayleigh,Henri Poincaré,Subrahmanyan Chandrasekhar,Enrico Fermi,Paul Ledoux,Richard BaderandEugene Parker.Fritz Zwickywas the first to use the virial theorem to deduce the existence of unseen matter, which is now calleddark matter.Richard Badershowed that the charge distribution of a total system can be partitioned into its kinetic and potential energies that obey the virial theorem.[3]As another example of its many applications, the virial theorem has been used to derive theChandrasekhar limitfor the stability ofwhite dwarfstars. ConsiderN= 2particles with equal massm, acted upon by mutually attractive forces. Suppose the particles are at diametrically opposite points of a circular orbit with radiusr. The velocities arev1(t)andv2(t) = −v1(t), which are normal to forcesF1(t)andF2(t) = −F1(t). The respective magnitudes are fixed atvandF. The average kinetic energy of the system in an interval of time fromt1tot2is⟨T⟩=1t2−t1∫t1t2∑k=1N12mk|vk(t)|2dt=1t2−t1∫t1t2(12m|v1(t)|2+12m|v2(t)|2)dt=mv2.{\displaystyle \langle T\rangle ={\frac {1}{t_{2}-t_{1}}}\int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}{\frac {1}{2}}m_{k}|\mathbf {v} _{k}(t)|^{2}\,dt={\frac {1}{t_{2}-t_{1}}}\int _{t_{1}}^{t_{2}}\left({\frac {1}{2}}m|\mathbf {v} _{1}(t)|^{2}+{\frac {1}{2}}m|\mathbf {v} _{2}(t)|^{2}\right)\,dt=mv^{2}.}Taking center of mass as the origin, the particles have positionsr1(t)andr2(t) = −r1(t)with fixed magnituder. The attractive forces act in opposite directions as positions, soF1(t) ⋅r1(t) =F2(t) ⋅r2(t) = −Fr. Applying thecentripetal forceformulaF=mv2/rresults in−12∑k=1N⟨Fk⋅rk⟩=−12(−Fr−Fr)=Fr=mv2r⋅r=mv2=⟨T⟩,{\displaystyle -{\frac {1}{2}}\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle =-{\frac {1}{2}}(-Fr-Fr)=Fr={\frac {mv^{2}}{r}}\cdot r=mv^{2}=\langle T\rangle ,}as required. Note: If the origin is displaced, then we'd obtain the same result. This is because the dot product of the displacement with equal and opposite forcesF1(t),F2(t)results in net cancellation. Although the virial theorem depends on averaging the total kinetic and potential energies, the presentation here postpones the averaging to the last step. For a collection ofNpoint particles, thescalarmoment of inertiaIabout theoriginisI=∑k=1Nmk|rk|2=∑k=1Nmkrk2,{\displaystyle I=\sum _{k=1}^{N}m_{k}|\mathbf {r} _{k}|^{2}=\sum _{k=1}^{N}m_{k}r_{k}^{2},}wheremkandrkrepresent the mass and position of thekth particle.rk= |rk|is the position vector magnitude. 
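As a numerical sanity check on the two-particle example above, and a preview of the time-averaged statement derived below starting from the scalar G, the following Python sketch integrates a bound, non-circular orbit in a V(r) = −1/r potential and compares ⟨T⟩ with −⟨V⟩/2. It is not part of the original article; the units with GMm = 1 and the leapfrog integrator are my choices.

import numpy as np

# Virial-theorem check for a single particle bound in V(r) = -1/r
# (equivalently a two-body problem in relative coordinates, G*M*m = 1, m = 1),
# integrated with the kick-drift-kick leapfrog scheme.
def time_averages(r0=1.0, v0=1.2, dt=5e-4, steps=400_000):
    r = np.array([r0, 0.0])
    v = np.array([0.0, v0])          # v0 < sqrt(2/r0), so the orbit is bound
    T_sum = V_sum = 0.0
    for _ in range(steps):
        v += 0.5 * dt * (-r / np.linalg.norm(r)**3)   # half kick
        r += dt * v                                    # drift
        v += 0.5 * dt * (-r / np.linalg.norm(r)**3)   # half kick
        T_sum += 0.5 * v @ v                           # kinetic energy sample
        V_sum += -1.0 / np.linalg.norm(r)              # potential energy sample
    return T_sum / steps, V_sum / steps

T_avg, V_avg = time_averages()
print(f"<T> = {T_avg:.4f}   -<V>/2 = {-V_avg / 2:.4f}")   # nearly equal

The two printed numbers agree up to the averaging error over a finite number of orbits, as the n = −1 form of the theorem predicts.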
Consider the scalarG=∑k=1Npk⋅rk,{\displaystyle G=\sum _{k=1}^{N}\mathbf {p} _{k}\cdot \mathbf {r} _{k},}wherepkis themomentumvectorof thekth particle.[4]Assuming that the masses are constant,Gis one-half the time derivative of this moment of inertia:12dIdt=12ddt∑k=1Nmkrk⋅rk=∑k=1Nmkdrkdt⋅rk=∑k=1Npk⋅rk=G.{\displaystyle {\begin{aligned}{\frac {1}{2}}{\frac {dI}{dt}}&={\frac {1}{2}}{\frac {d}{dt}}\sum _{k=1}^{N}m_{k}\mathbf {r} _{k}\cdot \mathbf {r} _{k}\\&=\sum _{k=1}^{N}m_{k}\,{\frac {d\mathbf {r} _{k}}{dt}}\cdot \mathbf {r} _{k}\\&=\sum _{k=1}^{N}\mathbf {p} _{k}\cdot \mathbf {r} _{k}=G.\end{aligned}}}In turn, the time derivative ofGisdGdt=∑k=1Npk⋅drkdt+∑k=1Ndpkdt⋅rk=∑k=1Nmkdrkdt⋅drkdt+∑k=1NFk⋅rk=2T+∑k=1NFk⋅rk,{\displaystyle {\begin{aligned}{\frac {dG}{dt}}&=\sum _{k=1}^{N}\mathbf {p} _{k}\cdot {\frac {d\mathbf {r} _{k}}{dt}}+\sum _{k=1}^{N}{\frac {d\mathbf {p} _{k}}{dt}}\cdot \mathbf {r} _{k}\\&=\sum _{k=1}^{N}m_{k}{\frac {d\mathbf {r} _{k}}{dt}}\cdot {\frac {d\mathbf {r} _{k}}{dt}}+\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}\\&=2T+\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k},\end{aligned}}}wheremkis the mass of thekth particle,Fk=⁠dpk/dt⁠is the net force on that particle, andTis the totalkinetic energyof the system according to thevk=⁠drk/dt⁠velocity of each particle,T=12∑k=1Nmkvk2=12∑k=1Nmkdrkdt⋅drkdt.{\displaystyle T={\frac {1}{2}}\sum _{k=1}^{N}m_{k}v_{k}^{2}={\frac {1}{2}}\sum _{k=1}^{N}m_{k}{\frac {d\mathbf {r} _{k}}{dt}}\cdot {\frac {d\mathbf {r} _{k}}{dt}}.} The total forceFkon particlekis the sum of all the forces from the other particlesjin the system:Fk=∑j=1NFjk,{\displaystyle \mathbf {F} _{k}=\sum _{j=1}^{N}\mathbf {F} _{jk},}whereFjkis the force applied by particlejon particlek. Hence, the virial can be written as−12∑k=1NFk⋅rk=−12∑k=1N∑j=1NFjk⋅rk.{\displaystyle -{\frac {1}{2}}\,\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}=-{\frac {1}{2}}\,\sum _{k=1}^{N}\sum _{j=1}^{N}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}.} Since no particle acts on itself (i.e.,Fjj= 0for1 ≤j≤N), we split the sum in terms below and above this diagonal and add them together in pairs:∑k=1NFk⋅rk=∑k=1N∑j=1NFjk⋅rk=∑k=2N∑j=1k−1Fjk⋅rk+∑k=1N−1∑j=k+1NFjk⋅rk=∑k=2N∑j=1k−1Fjk⋅rk+∑j=2N∑k=1j−1Fjk⋅rk=∑k=2N∑j=1k−1(Fjk⋅rk+Fkj⋅rj)=∑k=2N∑j=1k−1(Fjk⋅rk−Fjk⋅rj)=∑k=2N∑j=1k−1Fjk⋅(rk−rj),{\displaystyle {\begin{aligned}\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}&=\sum _{k=1}^{N}\sum _{j=1}^{N}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}=\sum _{k=2}^{N}\sum _{j=1}^{k-1}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}+\sum _{k=1}^{N-1}\sum _{j=k+1}^{N}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}\\&=\sum _{k=2}^{N}\sum _{j=1}^{k-1}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}+\sum _{j=2}^{N}\sum _{k=1}^{j-1}\mathbf {F} _{jk}\cdot \mathbf {r} _{k}=\sum _{k=2}^{N}\sum _{j=1}^{k-1}(\mathbf {F} _{jk}\cdot \mathbf {r} _{k}+\mathbf {F} _{kj}\cdot \mathbf {r} _{j})\\&=\sum _{k=2}^{N}\sum _{j=1}^{k-1}(\mathbf {F} _{jk}\cdot \mathbf {r} _{k}-\mathbf {F} _{jk}\cdot \mathbf {r} _{j})=\sum _{k=2}^{N}\sum _{j=1}^{k-1}\mathbf {F} _{jk}\cdot (\mathbf {r} _{k}-\mathbf {r} _{j}),\end{aligned}}}where we have usedNewton's third law of motion, i.e.,Fjk= −Fkj(equal and opposite reaction). It often happens that the forces can be derived from a potential energyVjkthat is a function only of the distancerjkbetween the point particlesjandk. 
Since the force is the negative gradient of the potential energy, we have in this caseFjk=−∇rkVjk=−dVjkdrjk(rk−rjrjk),{\displaystyle \mathbf {F} _{jk}=-\nabla _{\mathbf {r} _{k}}V_{jk}=-{\frac {dV_{jk}}{dr_{jk}}}\left({\frac {\mathbf {r} _{k}-\mathbf {r} _{j}}{r_{jk}}}\right),}which is equal and opposite toFkj= −∇rjVkj= −∇rjVjk, the force applied by particlekon particlej, as may be confirmed by explicit calculation. Hence,∑k=1NFk⋅rk=∑k=2N∑j=1k−1Fjk⋅(rk−rj)=−∑k=2N∑j=1k−1dVjkdrjk|rk−rj|2rjk=−∑k=2N∑j=1k−1dVjkdrjkrjk.{\displaystyle {\begin{aligned}\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}&=\sum _{k=2}^{N}\sum _{j=1}^{k-1}\mathbf {F} _{jk}\cdot (\mathbf {r} _{k}-\mathbf {r} _{j})\\&=-\sum _{k=2}^{N}\sum _{j=1}^{k-1}{\frac {dV_{jk}}{dr_{jk}}}{\frac {|\mathbf {r} _{k}-\mathbf {r} _{j}|^{2}}{r_{jk}}}\\&=-\sum _{k=2}^{N}\sum _{j=1}^{k-1}{\frac {dV_{jk}}{dr_{jk}}}r_{jk}.\end{aligned}}} ThusdGdt=2T+∑k=1NFk⋅rk=2T−∑k=2N∑j=1k−1dVjkdrjkrjk.{\displaystyle {\frac {dG}{dt}}=2T+\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}=2T-\sum _{k=2}^{N}\sum _{j=1}^{k-1}{\frac {dV_{jk}}{dr_{jk}}}r_{jk}.} In a common special case, the potential energyVbetween two particles is proportional to a powernof their distancerij:Vjk=αrjkn,{\displaystyle V_{jk}=\alpha r_{jk}^{n},}where the coefficientαand the exponentnare constants. In such cases, the virial is−12∑k=1NFk⋅rk=12∑k=1N∑j<kdVjkdrjkrjk=12∑k=1N∑j<knαrjkn−1rjk=12∑k=1N∑j<knVjk=n2VTOT,{\displaystyle {\begin{aligned}-{\frac {1}{2}}\,\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}&={\frac {1}{2}}\,\sum _{k=1}^{N}\sum _{j<k}{\frac {dV_{jk}}{dr_{jk}}}r_{jk}\\&={\frac {1}{2}}\,\sum _{k=1}^{N}\sum _{j<k}n\alpha r_{jk}^{n-1}r_{jk}\\&={\frac {1}{2}}\,\sum _{k=1}^{N}\sum _{j<k}nV_{jk}={\frac {n}{2}}\,V_{\text{TOT}},\end{aligned}}}whereVTOT=∑k=1N∑j<kVjk{\displaystyle V_{\text{TOT}}=\sum _{k=1}^{N}\sum _{j<k}V_{jk}}is the total potential energy of the system. ThusdGdt=2T+∑k=1NFk⋅rk=2T−nVTOT.{\displaystyle {\frac {dG}{dt}}=2T+\sum _{k=1}^{N}\mathbf {F} _{k}\cdot \mathbf {r} _{k}=2T-nV_{\text{TOT}}.} For gravitating systems the exponentnequals −1, givingLagrange's identitydGdt=12d2Idt2=2T+VTOT,{\displaystyle {\frac {dG}{dt}}={\frac {1}{2}}{\frac {d^{2}I}{dt^{2}}}=2T+V_{\text{TOT}},}which was derived byJoseph-Louis Lagrangeand extended byCarl Jacobi. The average of this derivative over a durationτis defined as⟨dGdt⟩τ=1τ∫0τdGdtdt=1τ∫G(0)G(τ)dG=G(τ)−G(0)τ,{\displaystyle \left\langle {\frac {dG}{dt}}\right\rangle _{\tau }={\frac {1}{\tau }}\int _{0}^{\tau }{\frac {dG}{dt}}\,dt={\frac {1}{\tau }}\int _{G(0)}^{G(\tau )}\,dG={\frac {G(\tau )-G(0)}{\tau }},}from which we obtain the exact equation⟨dGdt⟩τ=2⟨T⟩τ+∑k=1N⟨Fk⋅rk⟩τ.{\displaystyle \left\langle {\frac {dG}{dt}}\right\rangle _{\tau }=2\langle T\rangle _{\tau }+\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle _{\tau }.} Thevirial theoremstates that if⟨dG/dt⟩τ= 0, then2⟨T⟩τ=−∑k=1N⟨Fk⋅rk⟩τ.{\displaystyle 2\langle T\rangle _{\tau }=-\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle _{\tau }.} There are many reasons why the average of the time derivative might vanish. One often-cited reason applies to stably bound systems, that is, to systems that hang together forever and whose parameters are finite. 
In this case, velocities and coordinates of the particles of the system have upper and lower limits, so thatGboundis bounded between two extremes,GminandGmax, and the average goes to zero in the limit of infiniteτ:limτ→∞|⟨dGbounddt⟩τ|=limτ→∞|G(τ)−G(0)τ|≤limτ→∞Gmax−Gminτ=0.{\displaystyle \lim _{\tau \to \infty }\left|\left\langle {\frac {dG^{\text{bound}}}{dt}}\right\rangle _{\tau }\right|=\lim _{\tau \to \infty }\left|{\frac {G(\tau )-G(0)}{\tau }}\right|\leq \lim _{\tau \to \infty }{\frac {G_{\max }-G_{\min }}{\tau }}=0.} Even if the average of the time derivative ofGis only approximately zero, the virial theorem holds to the same degree of approximation. For power-law forces with an exponentn, the general equation holds:⟨T⟩τ=−12∑k=1N⟨Fk⋅rk⟩τ=n2⟨VTOT⟩τ.{\displaystyle \langle T\rangle _{\tau }=-{\frac {1}{2}}\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle _{\tau }={\frac {n}{2}}\langle V_{\text{TOT}}\rangle _{\tau }.} Forgravitationalattraction,n= −1, and the average kinetic energy equals half of the average negative potential energy:⟨T⟩τ=−12⟨VTOT⟩τ.{\displaystyle \langle T\rangle _{\tau }=-{\frac {1}{2}}\langle V_{\text{TOT}}\rangle _{\tau }.} This general result is useful for complex gravitating systems such asplanetary systemsorgalaxies. A simple application of the virial theorem concernsgalaxy clusters. If a region of space is unusually full of galaxies, it is safe to assume that they have been together for a long time, and the virial theorem can be applied.Doppler effectmeasurements give lower bounds for their relative velocities, and the virial theorem gives a lower bound for the total mass of the cluster, including any dark matter. If theergodic hypothesisholds for the system under consideration, the averaging need not be taken over time; anensemble averagecan also be taken, with equivalent results. Although originally derived for classical mechanics, the virial theorem also holds for quantum mechanics, as first shown byVladimir Fock[5]using theEhrenfest theorem. Evaluate thecommutatorof theHamiltonianH=V({Xi})+∑nPn22mn{\displaystyle H=V{\bigl (}\{X_{i}\}{\bigr )}+\sum _{n}{\frac {P_{n}^{2}}{2m_{n}}}}with the position operatorXnand the momentum operatorPn=−iℏddXn{\displaystyle P_{n}=-i\hbar {\frac {d}{dX_{n}}}}of particlen,[H,XnPn]=Xn[H,Pn]+[H,Xn]Pn=iℏXndVdXn−iℏPn2mn.{\displaystyle [H,X_{n}P_{n}]=X_{n}[H,P_{n}]+[H,X_{n}]P_{n}=i\hbar X_{n}{\frac {dV}{dX_{n}}}-i\hbar {\frac {P_{n}^{2}}{m_{n}}}.} Summing over all particles, one finds that forQ=∑nXnPn{\displaystyle Q=\sum _{n}X_{n}P_{n}}the commutator isiℏ[H,Q]=2T−∑nXndVdXn,{\displaystyle {\frac {i}{\hbar }}[H,Q]=2T-\sum _{n}X_{n}{\frac {dV}{dX_{n}}},}whereT=∑nPn2/2mn{\textstyle T=\sum _{n}P_{n}^{2}/2m_{n}}is the kinetic energy. The left-hand side of this equation is justdQ/dt, according to theHeisenberg equationof motion. The expectation value⟨dQ/dt⟩of this time derivative vanishes in a stationary state, leading to thequantum virial theorem:2⟨T⟩=∑n⟨XndVdXn⟩.{\displaystyle 2\langle T\rangle =\sum _{n}\left\langle X_{n}{\frac {dV}{dX_{n}}}\right\rangle .} In the field of quantum mechanics, there exists another form of the virial theorem, applicable to localized solutions to the stationarynonlinear Schrödinger equationorKlein–Gordon equation, isPokhozhaev's identity,[6]also known asDerrick's theorem. Letg(s){\displaystyle g(s)}be continuous and real-valued, withg(0)=0{\displaystyle g(0)=0}. DenoteG(s)=∫0sg(t)dt{\textstyle G(s)=\int _{0}^{s}g(t)\,dt}. 
Letu∈Lloc∞(Rn),∇u∈L2(Rn),G(u(⋅))∈L1(Rn),n∈N{\displaystyle u\in L_{\text{loc}}^{\infty }(\mathbb {R} ^{n}),\quad \nabla u\in L^{2}(\mathbb {R} ^{n}),\quad G(u(\cdot ))\in L^{1}(\mathbb {R} ^{n}),\quad n\in \mathbb {N} }be a solution to the equation−∇2u=g(u),{\displaystyle -\nabla ^{2}u=g(u),}in the sense ofdistributions. Thenu{\displaystyle u}satisfies the relation(n−22)∫Rn|∇u(x)|2dx=n∫RnG(u(x))dx.{\displaystyle \left({\frac {n-2}{2}}\right)\int _{\mathbb {R} ^{n}}|\nabla u(x)|^{2}\,dx=n\int _{\mathbb {R} ^{n}}G{\big (}u(x){\big )}\,dx.} For a single particle in special relativity, it is not the case thatT=⁠1/2⁠p·v. Instead, it is true thatT= (γ− 1)mc2, whereγis theLorentz factor γ=11−v2c2,{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},}andβ=⁠v/c⁠. We have12p⋅v=12βγmc⋅βc=12γβ2mc2=(γβ22(γ−1))T.{\displaystyle {\begin{aligned}{\frac {1}{2}}\mathbf {p} \cdot \mathbf {v} &={\frac {1}{2}}{\boldsymbol {\beta }}\gamma mc\cdot {\boldsymbol {\beta }}c\\&={\frac {1}{2}}\gamma \beta ^{2}mc^{2}\\[5pt]&=\left({\frac {\gamma \beta ^{2}}{2(\gamma -1)}}\right)T.\end{aligned}}}The last expression can be simplified to(1+1−β22)T=(γ+12γ)T.{\displaystyle \left({\frac {1+{\sqrt {1-\beta ^{2}}}}{2}}\right)T=\left({\frac {\gamma +1}{2\gamma }}\right)T.}Thus, under the conditions described in earlier sections (includingNewton's third law of motion,Fjk= −Fkj, despite relativity), the time average forNparticles with a power law potential isn2⟨VTOT⟩τ=⟨∑k=1N(1+1−βk22)Tk⟩τ=⟨∑k=1N(γk+12γk)Tk⟩τ.{\displaystyle {\frac {n}{2}}\left\langle V_{\text{TOT}}\right\rangle _{\tau }=\left\langle \sum _{k=1}^{N}\left({\tfrac {1+{\sqrt {1-\beta _{k}^{2}}}}{2}}\right)T_{k}\right\rangle _{\tau }=\left\langle \sum _{k=1}^{N}\left({\frac {\gamma _{k}+1}{2\gamma _{k}}}\right)T_{k}\right\rangle _{\tau }.}In particular, the ratio of kinetic energy to potential energy is no longer fixed, but necessarily falls into an interval:2⟨TTOT⟩n⟨VTOT⟩∈[1,2],{\displaystyle {\frac {2\langle T_{\text{TOT}}\rangle }{n\langle V_{\text{TOT}}\rangle }}\in [1,2],}where the more relativistic systems exhibit the larger ratios. The virial theorem has a particularly simple form for periodic motion. It can be used to perform perturbative calculation for nonlinear oscillators.[7] It can also be used to study motion in acentral potential.[4]If the central potential is of the formU∝rn{\displaystyle U\propto r^{n}}, the virial theorem simplifies to⟨T⟩=n2⟨U⟩{\displaystyle \langle T\rangle ={\frac {n}{2}}\langle U\rangle }.[citation needed]In particular, for gravitational or electrostatic (Coulomb) attraction,⟨T⟩=−12⟨U⟩{\displaystyle \langle T\rangle =-{\frac {1}{2}}\langle U\rangle }. Analysis based on Sivardiere, 1986.[7]For a one-dimensional oscillator with massm{\displaystyle m}, positionx{\displaystyle x}, driving forceFcos⁡(ωt){\displaystyle F\cos(\omega t)}, spring constantk{\displaystyle k}, and damping coefficientγ{\displaystyle \gamma }, the equation of motion ismd2xdt2⏟acceleration=−kxdd⏟spring−γdxdt⏟friction+Fcos⁡(ωt)dd⏟external driving.{\displaystyle m\underbrace {\frac {d^{2}x}{dt^{2}}} _{\text{acceleration}}=\underbrace {-kx{\vphantom {\frac {d}{d}}}} _{\text{spring}}\ \underbrace {-\ \gamma {\frac {dx}{dt}}} _{\text{friction}}\ \underbrace {+\ F\cos(\omega t){\vphantom {\frac {d}{d}}}} _{\text{external driving}}.} When the oscillator has reached a steady state, it performs a stable oscillationx=Xcos⁡(ωt+φ){\displaystyle x=X\cos(\omega t+\varphi )}, whereX{\displaystyle X}is the amplitude, andφ{\displaystyle \varphi }is the phase angle. 
Applying the virial theorem, we havem⟨x˙x˙⟩=k⟨xx⟩+γ⟨xx˙⟩−F⟨cos⁡(ωt)x⟩{\displaystyle m\langle {\dot {x}}{\dot {x}}\rangle =k\langle xx\rangle +\gamma \langle x{\dot {x}}\rangle -F\langle \cos(\omega t)x\rangle }, which simplifies toFcos⁡(φ)=m(ω02−ω2)X{\displaystyle F\cos(\varphi )=m(\omega _{0}^{2}-\omega ^{2})X}, whereω0=k/m{\displaystyle \omega _{0}={\sqrt {k/m}}}is the natural frequency of the oscillator. To solve the two unknowns, we need another equation. In steady state, the power lost per cycle is equal to the power gained per cycle:⟨x˙γx˙⟩⏟power dissipated=⟨x˙Fcos⁡ωt⟩⏟power input,{\displaystyle \underbrace {\langle {\dot {x}}\,\gamma {\dot {x}}\rangle } _{\text{power dissipated}}=\underbrace {\langle {\dot {x}}\,F\cos \omega t\rangle } _{\text{power input}},}which simplifies tosin⁡φ=−γXωF{\displaystyle \sin \varphi =-{\frac {\gamma X\omega }{F}}}. Now we have two equations that yield the solution{X=F2γ2ω2+m2(ω02−ω2)2,tan⁡φ=−γωm(ω02−ω2).{\displaystyle {\begin{cases}X={\sqrt {\dfrac {F^{2}}{\gamma ^{2}\omega ^{2}+m^{2}(\omega _{0}^{2}-\omega ^{2})^{2}}}},\\\tan \varphi =-{\dfrac {\gamma \omega }{m(\omega _{0}^{2}-\omega ^{2})}}.\end{cases}}} Consider a container filled with an ideal gas consisting of point masses. The only forces applied to the point masses are due to the container walls. In this case, the expression in the virial theorem equals⟨∑iFi⋅ri⟩=−P∮n^⋅rdA,{\displaystyle {\Big \langle }\sum _{i}\mathbf {F} _{i}\cdot \mathbf {r} _{i}{\Big \rangle }=-P\oint {\hat {\mathbf {n} }}\cdot \mathbf {r} \,dA,}since, by definition, the pressurePis the average force per area exerted by the gas upon the walls, which is normal to the wall. There is a minus sign becausen^{\displaystyle {\hat {\mathbf {n} }}}is the unit normal vector pointing outwards, and the force to be used is the one upon the particles by the wall. Then the virial theorem states that⟨T⟩=P2∮n^⋅rdA.{\displaystyle \langle T\rangle ={\frac {P}{2}}\oint {\hat {\mathbf {n} }}\cdot \mathbf {r} \,dA.}By thedivergence theorem,∮n^⋅rdA=∫∇⋅rdV=3∫dV=3V{\textstyle \oint {\hat {\mathbf {n} }}\cdot \mathbf {r} \,dA=\int \nabla \cdot \mathbf {r} \,dV=3\int dV=3V}. Fromequipartition, the average total kinetic energy⟨T⟩=N⟨12mv2⟩=N⋅32kT{\textstyle \langle T\rangle =N{\big \langle }{\frac {1}{2}}mv^{2}{\big \rangle }=N\cdot {\frac {3}{2}}kT}. Hence,PV=NkT{\displaystyle PV=NkT}, theideal gas law.[8] In 1933, Fritz Zwicky applied the virial theorem to estimate the mass ofComa Cluster, and discovered a discrepancy of mass of about 450, which he explained as due to "dark matter".[9]He refined the analysis in 1937, finding a discrepancy of about 500.[10][11] He approximated the Coma cluster as a spherical "gas" ofN{\displaystyle N}stars of roughly equal massm{\displaystyle m}, which gives⟨T⟩=12Nm⟨v2⟩{\textstyle \langle T\rangle ={\frac {1}{2}}Nm\langle v^{2}\rangle }. The total gravitational potential energy of the cluster isU=−∑i<jGm2ri,j{\displaystyle U=-\sum _{i<j}{\frac {Gm^{2}}{r_{i,j}}}}, giving⟨U⟩=−Gm2∑i<j⟨1/ri,j⟩{\textstyle \langle U\rangle =-Gm^{2}\sum _{i<j}\langle {1}/{r_{i,j}}\rangle }. Assuming the motion of the stars are all the same over a long enough time (ergodicity),⟨U⟩=−12N2Gm2⟨1/r⟩{\textstyle \langle U\rangle =-{\frac {1}{2}}N^{2}Gm^{2}\langle {1}/{r}\rangle }. Zwicky estimated⟨U⟩{\displaystyle \langle U\rangle }as the gravitational potential of a uniform ball of constant density, giving⟨U⟩=−35GN2m2R{\textstyle \langle U\rangle =-{\frac {3}{5}}{\frac {GN^{2}m^{2}}{R}}}. 
So by the virial theorem, the total mass of the cluster is

\[ Nm = \frac{5\langle v^2\rangle}{3G\langle 1/r\rangle}. \]

Zwicky (1933)[9] estimated that there are N = 800 galaxies in the cluster, each having observed stellar mass m = 10⁹ M_⊙ (suggested by Hubble), and that the cluster has radius R = 10⁶ ly. He also measured the radial velocities of the galaxies by Doppler shifts in galactic spectra to be ⟨v_r²⟩ = (1000 km/s)². Assuming equipartition of kinetic energy, ⟨v²⟩ = 3⟨v_r²⟩. By the virial theorem, the total mass of the cluster should be 5R⟨v_r²⟩/G ≈ 3.6×10¹⁴ M_⊙. However, the observed mass is Nm = 8×10¹¹ M_⊙, meaning the total mass is about 450 times the observed mass.

Lord Rayleigh published a generalization of the virial theorem in 1900,[12] which was partially reprinted in 1903.[13] Henri Poincaré proved and applied a form of the virial theorem in 1911 to the problem of formation of the Solar System from a proto-stellar cloud (then known as cosmogony).[14] A variational form of the virial theorem was developed in 1945 by Ledoux.[15] A tensor form of the virial theorem was developed by Parker,[16] Chandrasekhar[17] and Fermi.[18] The following generalization of the virial theorem was established by Pollard in 1964 for the case of the inverse square law:[19][20][failed verification]

\[ 2\lim_{\tau\to+\infty}\langle T\rangle_\tau = \lim_{\tau\to+\infty}\langle U\rangle_\tau \quad\text{if and only if}\quad \lim_{\tau\to+\infty}\tau^{-2}I(\tau) = 0. \]

Otherwise a boundary term must be added.[21]

The virial theorem can be extended to include electric and magnetic fields. The result is[22]

\[ \frac{1}{2}\frac{d^2I}{dt^2} + \int_V x_k \frac{\partial G_k}{\partial t}\, d^3r = 2(T+U) + W^{\mathrm{E}} + W^{\mathrm{M}} - \int x_k (p_{ik} + T_{ik})\, dS_i, \]

where I is the moment of inertia, G is the momentum density of the electromagnetic field, T is the kinetic energy of the "fluid", U is the random "thermal" energy of the particles, and W^E and W^M are the electric and magnetic energy content of the volume considered. Finally, p_ik is the fluid-pressure tensor expressed in the local moving coordinate system,

\[ p_{ik} = \sum_\sigma n^\sigma m^\sigma \langle v_i v_k \rangle^\sigma - V_i V_k \sum_\sigma m^\sigma n^\sigma, \]

and T_ik is the electromagnetic stress tensor,

\[ T_{ik} = \left( \frac{\varepsilon_0 E^2}{2} + \frac{B^2}{2\mu_0} \right)\delta_{ik} - \left( \varepsilon_0 E_i E_k + \frac{B_i B_k}{\mu_0} \right). \]

A plasmoid is a finite configuration of magnetic fields and plasma. With the virial theorem it is easy to see that any such configuration will expand if not contained by external forces. In a finite configuration without pressure-bearing walls or magnetic coils, the surface integral will vanish. Since all the other terms on the right-hand side are positive, the acceleration of the moment of inertia will also be positive. It is also easy to estimate the expansion time τ.
If a total massMis confined within a radiusR, then the moment of inertia is roughlyMR2, and the left hand side of the virial theorem is⁠MR2/τ2⁠. The terms on the right hand side add up to aboutpR3, wherepis the larger of the plasma pressure or the magnetic pressure. Equating these two terms and solving forτ, we find τ∼Rcs,{\displaystyle \tau \,\sim {\frac {R}{c_{\mathrm {s} }}},} wherecsis the speed of theion acoustic wave(or theAlfvén wave, if the magnetic pressure is higher than the plasma pressure). Thus the lifetime of a plasmoid is expected to be on the order of the acoustic (or Alfvén) transit time. In case when in the physical system the pressure field, the electromagnetic and gravitational fields are taken into account, as well as the field of particles’ acceleration, the virial theorem is written in the relativistic form as follows:[23] ⟨Wk⟩≈−0.6∑k=1N⟨Fk⋅rk⟩,{\displaystyle \left\langle W_{k}\right\rangle \approx -0.6\sum _{k=1}^{N}\langle \mathbf {F} _{k}\cdot \mathbf {r} _{k}\rangle ,} where the valueWk≈γcTexceeds the kinetic energy of the particlesTby a factor equal to the Lorentz factorγcof the particles at the center of the system. Under normal conditions we can assume thatγc≈ 1, then we can see that in the virial theorem the kinetic energy is related to the potential energy not by the coefficient⁠1/2⁠, but rather by the coefficient close to 0.6. The difference from the classical case arises due to considering the pressure field and the field of particles’ acceleration inside the system, while the derivative of the scalarGis not equal to zero and should be considered as thematerial derivative. An analysis of the integral theorem of generalized virial makes it possible to find, on the basis of field theory, a formula for the root-mean-square speed of typical particles of a system without using the notion of temperature:[24] vrms=c1−4πηρ0r2c2γc2sin2⁡(rc4πηρ0),{\displaystyle v_{\mathrm {rms} }=c{\sqrt {1-{\frac {4\pi \eta \rho _{0}r^{2}}{c^{2}\gamma _{c}^{2}\sin ^{2}\left({\frac {r}{c}}{\sqrt {4\pi \eta \rho _{0}}}\right)}}}},} wherec{\displaystyle ~c}is the speed of light,η{\displaystyle ~\eta }is the acceleration field constant,ρ0{\displaystyle ~\rho _{0}}is the mass density of particles,r{\displaystyle ~r}is the current radius. Unlike the virial theorem for particles, for the electromagnetic field the virial theorem is written as follows:[25]Ekf+2Wf=0,{\displaystyle ~E_{kf}+2W_{f}=0,}where the energyEkf=∫Aαjα−gdx1dx2dx3{\textstyle ~E_{kf}=\int A_{\alpha }j^{\alpha }{\sqrt {-g}}\,dx^{1}\,dx^{2}\,dx^{3}}considered as the kinetic field energy associated with four-currentjα{\displaystyle j^{\alpha }}, andWf=14μ0∫FαβFαβ−gdx1dx2dx3{\displaystyle ~W_{f}={\frac {1}{4\mu _{0}}}\int F_{\alpha \beta }F^{\alpha \beta }{\sqrt {-g}}\,dx^{1}\,dx^{2}\,dx^{3}}sets the potential field energy found through the components of the electromagnetic tensor. The virial theorem is frequently applied in astrophysics, especially relating thegravitational potential energyof a system to itskineticorthermal energy. Some common virial relations are[citation needed]35GMR=32kBTmp=12v2{\displaystyle {\frac {3}{5}}{\frac {GM}{R}}={\frac {3}{2}}{\frac {k_{\mathrm {B} }T}{m_{\mathrm {p} }}}={\frac {1}{2}}v^{2}}for a massM, radiusR, velocityv, and temperatureT. The constants areNewton's constantG, theBoltzmann constantkB, and proton massmp. Note that these relations are only approximate, and often the leading numerical factors (e.g.⁠3/5⁠or⁠1/2⁠) are neglected entirely. 
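These approximate relations can be checked with a few lines of arithmetic. The following Python snippet, a back-of-envelope script of mine rather than anything from the article, plugs Zwicky's Coma cluster inputs from the earlier section into the virial mass estimate 5R⟨v_r²⟩/G and reproduces the roughly 450-fold discrepancy:

# Zwicky-style virial mass estimate for the Coma cluster, using the inputs
# quoted earlier: R ~ 1e6 light-years and <v_r^2> ~ (1000 km/s)^2.
G     = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
LY    = 9.461e15           # metres per light-year
M_SUN = 1.989e30           # solar mass, kg

R   = 1e6 * LY             # cluster radius
vr2 = (1.0e6) ** 2         # mean-square radial velocity, (m/s)^2

# M = 5 R <v_r^2> / G follows from (1/2) M <v^2> = (3/10) G M^2 / R
# together with <v^2> = 3 <v_r^2>.
M_vir = 5 * R * vr2 / G
print(f"virial mass ~ {M_vir / M_SUN:.2e} solar masses")                  # ~3.6e14
print(f"ratio to the observed 8e11 M_sun: {M_vir / (8e11 * M_SUN):.0f}")  # ~450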
Inastronomy, the mass and size of a galaxy (or general overdensity) is often defined in terms of the "virial mass" and "virial radius" respectively. Because galaxies and overdensities in continuous fluids can be highly extended (even to infinity in some models, such as anisothermal sphere), it can be hard to define specific, finite measures of their mass and size. The virial theorem, and related concepts, provide an often convenient means by which to quantify these properties. In galaxy dynamics, the mass of a galaxy is often inferred by measuring therotation velocityof its gas and stars, assumingcircular Keplerian orbits. Using the virial theorem, thevelocity dispersionσcan be used in a similar way. Taking the kinetic energy (per particle) of the system asT=⁠1/2⁠v2~⁠3/2⁠σ2, and the potential energy (per particle) asU~⁠3/5⁠⁠GM/R⁠we can write GMR≈σ2.{\displaystyle {\frac {GM}{R}}\approx \sigma ^{2}.} HereR{\displaystyle R}is the radius at which the velocity dispersion is being measured, andMis the mass within that radius. The virial mass and radius are generally defined for the radius at which the velocity dispersion is a maximum, i.e. GMvirRvir≈σmax2.{\displaystyle {\frac {GM_{\text{vir}}}{R_{\text{vir}}}}\approx \sigma _{\max }^{2}.} As numerous approximations have been made, in addition to the approximate nature of these definitions, order-unity proportionality constants are often omitted (as in the above equations). These relations are thus only accurate in anorder of magnitudesense, or when used self-consistently. An alternate definition of the virial mass and radius is often used in cosmology where it is used to refer to the radius of a sphere, centered on agalaxyor agalaxy cluster, within which virial equilibrium holds. Since this radius is difficult to determine observationally, it is often approximated as the radius within which the average density is greater, by a specified factor, than thecritical densityρcrit=3H28πG{\displaystyle \rho _{\text{crit}}={\frac {3H^{2}}{8\pi G}}}whereHis theHubble parameterandGis thegravitational constant. A common choice for the factor is 200, which corresponds roughly to the typical over-density in spherical top-hat collapse (seeVirial mass), in which case the virial radius is approximated asrvir≈r200=r,ρ=200⋅ρcrit.{\displaystyle r_{\text{vir}}\approx r_{200}=r,\qquad \rho =200\cdot \rho _{\text{crit}}.}The virial mass is then defined relative to this radius asMvir≈M200=43πr2003⋅200ρcrit.{\displaystyle M_{\text{vir}}\approx M_{200}={\frac {4}{3}}\pi r_{200}^{3}\cdot 200\rho _{\text{crit}}.} The virial theorem is applicable to the cores of stars, by establishing a relation between gravitational potential energy and thermal kinetic energy (i.e. temperature). As stars on themain sequenceconvert hydrogen into helium in their cores, the mean molecular weight of the core increases and it must contract to maintain enough pressure to support its own weight. This contraction decreases its potential energy and, the virial theorem states, increases its thermal energy. The core temperature increases even as energy is lost, effectively a negativespecific heat.[26]This continues beyond the main sequence, unless the core becomes degenerate since that causes the pressure to become independent of temperature and the virial relation withnequals −1 no longer holds.[27]
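The r_200 definition above is straightforward to evaluate numerically. In the sketch below, the Hubble-constant value and the example radius are assumptions of mine, not the article's:

import math

# Critical density and the M_200 virial mass for a chosen radius, following
# rho_crit = 3 H^2 / (8 pi G) and M_200 = (4/3) pi r_200^3 * 200 rho_crit.
G     = 6.674e-11                 # m^3 kg^-1 s^-2
H0    = 70 * 1000 / 3.086e22      # assumed 70 km/s/Mpc, converted to s^-1
MPC   = 3.086e22                  # metres per megaparsec
M_SUN = 1.989e30                  # solar mass, kg

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"rho_crit ~ {rho_crit:.2e} kg/m^3")         # ~9.2e-27 kg/m^3

r200 = 2.0 * MPC                  # example virial radius of a rich cluster
M200 = (4 / 3) * math.pi * r200**3 * 200 * rho_crit
print(f"M_200 ~ {M200 / M_SUN:.1e} solar masses")  # ~9e14 for this radius

Because the over-density factor of 200 is itself only a conventional choice, the resulting mass should be read in the same order-of-magnitude spirit as the relations above.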
https://en.wikipedia.org/wiki/Virial_theorem
The Widom insertion method is a statistical thermodynamic approach to the calculation of material and mixture properties. It is named for Benjamin Widom, who derived it in 1963.[1] In general, there are two theoretical approaches to determining the statistical mechanical properties of materials. The first is the direct calculation of the overall partition function of the system, which directly yields the system free energy. The second approach, known as the Widom insertion method, instead derives from calculations centering on one molecule. The Widom insertion method directly yields the chemical potential of one component rather than the system free energy. This approach is most widely applied in molecular computer simulations[2][3] but has also been applied in the development of analytical statistical mechanical models. The Widom insertion method can be understood as an application of the Jarzynski equality, since it measures the excess free energy difference via the average work needed to change the system from a state with N molecules to a state with N+1 molecules.[4] It therefore measures the excess chemical potential, since μ_excess = ΔF_excess/ΔN, where ΔN = 1.

As originally formulated by Benjamin Widom in 1963,[1] the approach can be summarized by the equation

\[ \mathbf{B}_i = \frac{\rho_i}{a_i} = \left\langle \exp\!\left(-\frac{\psi}{k_B T}\right) \right\rangle, \]

where B_i is called the insertion parameter, ρ_i is the number density of species i, a_i is the activity of species i, k_B is the Boltzmann constant, T is temperature, and ψ is the interaction energy of an inserted particle with all other particles in the system. The average is over all possible insertions. This can be understood conceptually as fixing the location of all molecules in the system and then inserting a particle of species i at all locations through the system, averaging over a Boltzmann factor in its interaction energy over all of those locations. Note that in other ensembles, for example in the semi-grand canonical ensemble, the Widom insertion method works with modified formulas.[5]

From the above equation and from the definition of activity, the insertion parameter may be related to the chemical potential by

\[ \mu_i = \mu_i^{\circ} + k_B T \ln a_i = \mu_i^{\circ} + k_B T \ln\frac{\rho_i}{\mathbf{B}_i}, \]

where μ_i° is the reference chemical potential. The pressure-temperature-density relation, or equation of state, of a mixture is related to the insertion parameter via

\[ Z = \frac{P}{\rho k_B T} = 1 - \ln \mathbf{B} + \frac{1}{\rho}\int_0^{\rho} \ln \mathbf{B}\; d\rho', \]

where Z is the compressibility factor, ρ is the overall number density of the mixture, and ln B is a mole-fraction weighted average over all mixture components:

\[ \ln \mathbf{B} = \sum_i x_i \ln \mathbf{B}_i. \]

In the case of a 'hard core' repulsive model, in which each molecule or atom consists of a hard core with an infinite repulsive potential, insertions in which two molecules occupy the same space will not contribute to the average. In this case the insertion parameter becomes

\[ \mathbf{B}_i = \mathbf{P}_{ins,i} \left\langle \exp\!\left(-\frac{\psi}{k_B T}\right) \right\rangle_{ins}, \]

where P_ins,i is the probability that the randomly inserted molecule of species i will experience an attractive or zero net interaction; in other words, it is the probability that the inserted molecule does not 'overlap' with any other molecules, and the remaining average is taken over such non-overlapping insertions. The above is simplified further via the application of the mean field approximation, which essentially ignores fluctuations and treats all quantities by their average value, so that the insertion factor is given as

\[ \mathbf{B}_i = \mathbf{P}_{ins,i} \exp\!\left(-\frac{\langle \psi \rangle}{k_B T}\right). \]
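In simulation practice, the average ⟨exp(−ψ/k_BT)⟩ is estimated by repeatedly inserting a ghost particle into stored configurations. The sketch below is a minimal illustration of that procedure, not a reference implementation: the Lennard-Jones pair potential, the reduced units, the function names, and the use of uniform random "configurations" for the demo (valid only in the dilute limit) are all my assumptions; in practice the configurations would come from an equilibrated Monte Carlo or molecular dynamics run.

import numpy as np

rng = np.random.default_rng(0)

def lj(r2):
    # Lennard-Jones pair energy u(r) in reduced units, as a function of r^2.
    inv6 = 1.0 / r2**3
    return 4.0 * (inv6**2 - inv6)

def widom_mu_excess(configs, L, beta, n_insert=200):
    """Estimate mu_excess = -kT ln <exp(-beta psi)> by test-particle insertion."""
    boltz = []
    for pos in configs:                      # pos: (N, 3) particle coordinates
        for _ in range(n_insert):
            trial = rng.uniform(0, L, 3)     # random insertion point
            d = pos - trial
            d -= L * np.round(d / L)         # minimum-image convention
            r2 = np.maximum(np.einsum('ij,ij->i', d, d), 1e-12)
            psi = lj(r2).sum()               # energy of the ghost particle
            boltz.append(np.exp(-beta * psi))
    return -np.log(np.mean(boltz)) / beta

# Demo only: uniform random "configurations" (reasonable in the dilute limit).
L, N, beta = 10.0, 20, 1.0
configs = [rng.uniform(0, L, (N, 3)) for _ in range(100)]
print(f"mu_excess ~ {widom_mu_excess(configs, L, beta):.3f} (reduced units)")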
https://en.wikipedia.org/wiki/Widom_insertion_method
In numerical analysis andcomputational statistics,rejection samplingis a basic technique used to generate observations from adistribution. It is also commonly called theacceptance-rejection methodor "accept-reject algorithm" and is a type of exact simulation method. The method works for any distribution inRm{\displaystyle \mathbb {R} ^{m}}with adensity. Rejection sampling is based on the observation that to sample arandom variablein one dimension, one can perform a uniformly random sampling of the two-dimensional Cartesian graph, and keep the samples in the region under the graph of its density function.[1][2][3]Note that this property can be extended toN-dimension functions. To visualize the motivation behind rejection sampling, imagine graphing theprobability density function(PDF) of a random variable onto a large rectangular board and throwing darts at it. Assume that the darts are uniformly distributed around the board. Now remove all of the darts that are outside the area under the curve. The remaining darts will be distributed uniformly within the area under the curve, and thex{\displaystyle x}‑positions of these darts will be distributed according to the random variable's density. This is because there is the most room for the darts to land where the curve is highest and thus the probability density is greatest. The visualization just described is equivalent to a particular form of rejection sampling where the "proposal distribution" is uniform. Hence its graph is a rectangle. The general form of rejection sampling assumes that the board is not necessarily rectangular but is shaped according to the density of some proposal distribution (not necessarily normalized to1{\displaystyle 1}) that we know how to sample from (for example, usinginversion sampling). Its shape must be at least as high at every point as the distribution we want to sample from, so that the former completely encloses the latter. Otherwise, there would be parts of the curved area we want to sample from that could never be reached. Rejection sampling works as follows: This algorithm can be used to sample from the area under any curve, regardless of whether the function integrates to 1. In fact, scaling a function by a constant has no effect on the sampledx{\displaystyle x}‑positions. Thus, the algorithm can be used to sample from a distribution whosenormalizing constantis unknown, which is common incomputational statistics. The rejection sampling method generates sampling values from a target distributionX{\displaystyle X}with an arbitraryprobability density functionf(x){\displaystyle f(x)}by using a proposal distributionY{\displaystyle Y}with probability densityg(x){\displaystyle g(x)}. The idea is that one can generate a sample value fromX{\displaystyle X}by instead sampling fromY{\displaystyle Y}and accepting the sample fromY{\displaystyle Y}with probabilityf(x)/(Mg(x)){\displaystyle f(x)/(Mg(x))}, repeating the draws fromY{\displaystyle Y}until a value is accepted.M{\displaystyle M}here is a constant, finite bound on the likelihood ratiof(x)/g(x){\displaystyle f(x)/g(x)}, satisfyingM<∞{\displaystyle M<\infty }over thesupportofX{\displaystyle X}; in other words, M must satisfyf(x)≤Mg(x){\displaystyle f(x)\leq Mg(x)}for all values ofx{\displaystyle x}. Note that this requires that the support ofY{\displaystyle Y}must include the support ofX{\displaystyle X}—in other words,g(x)>0{\displaystyle g(x)>0}wheneverf(x)>0{\displaystyle f(x)>0}. 
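A minimal implementation of the accept-reject loop just described follows; the target density, the uniform proposal, and the bound M are illustrative choices of this sketch, not part of the article.

import numpy as np

rng = np.random.default_rng(1)

# Target: unnormalized density f(x) = x (1 - x)^4 on [0, 1], a Beta(2,5) shape.
# The normalizing constant is never needed.
f = lambda x: x * (1 - x)**4

# Proposal: Uniform(0, 1), so g(x) = 1. The bound must satisfy f(x) <= M g(x);
# f is maximized at x = 1/5 with f(1/5) = 0.08192, so M = 0.082 suffices.
M = 0.082

def rejection_sample(n):
    out, trials = [], 0
    while len(out) < n:
        trials += 1
        x = rng.uniform()                 # draw from the proposal g
        u = rng.uniform()                 # independent Uniform(0, 1)
        if u <= f(x) / M:                 # accept with probability f(x)/(M g(x))
            out.append(x)
    return np.array(out), trials

samples, trials = rejection_sample(10_000)
print(f"sample mean ~ {samples.mean():.3f} (Beta(2,5) mean = 2/7 ~ 0.286)")
print(f"empirical acceptance rate ~ {10_000 / trials:.2f}")
# Since f is unnormalized, the acceptance rate here equals
# (the normalizing constant of f) / M ~ 0.41 rather than 1/M.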
The validation of this method is the envelope principle: when simulating the pair(x,v=u⋅Mg(x)){\textstyle (x,v=u\cdot Mg(x))}, one produces a uniform simulation over the subgraph ofMg(x){\textstyle Mg(x)}. Accepting only pairs such thatu<f(x)/(Mg(x)){\textstyle u<f(x)/(Mg(x))}then produces pairs(x,v){\displaystyle (x,v)}uniformly distributed over the subgraph off(x){\displaystyle f(x)}and thus, marginally, a simulation fromf(x).{\displaystyle f(x).} This means that, with enough replicates, the algorithm generates a sample from the desired distributionf(x){\displaystyle f(x)}. There are a number of extensions to this algorithm, such as theMetropolis algorithm. This method relates to the general field ofMonte Carlotechniques, includingMarkov chain Monte Carloalgorithms that also use a proxy distribution to achieve simulation from the target distributionf(x){\displaystyle f(x)}. It forms the basis for algorithms such as theMetropolis algorithm. The unconditional acceptance probability is the proportion of proposed samples which are accepted, which isP(U≤f(Y)Mg(Y))=E⁡1[U≤f(Y)Mg(Y)]=E⁡[E⁡[1[U≤f(Y)Mg(Y)]|Y]](by tower property)=E⁡[P(U≤f(Y)Mg(Y)|Y)]=E⁡[f(Y)Mg(Y)](becausePr(U≤u)=u,whenUis uniform on(0,1))=∫y:g(y)>0f(y)Mg(y)g(y)dy=1M∫y:g(y)>0f(y)dy=1M(since support ofYincludes support ofX){\displaystyle {\begin{aligned}\mathbb {P} \left(U\leq {\frac {f(Y)}{Mg(Y)}}\right)&=\operatorname {E} \mathbf {1} _{\left[U\leq {\frac {f(Y)}{Mg(Y)}}\right]}\\[6pt]&=\operatorname {E} \left[\operatorname {E} [\mathbf {1} _{\left[U\leq {\frac {f(Y)}{Mg(Y)}}\right]}|Y]\right]&({\text{by tower property }})\\[6pt]&=\operatorname {E} \left[\mathbb {P} \left(U\leq {\frac {f(Y)}{Mg(Y)}}{\biggr |}Y\right)\right]\\[6pt]&=\operatorname {E} \left[{\frac {f(Y)}{Mg(Y)}}\right]&({\text{because }}\Pr(U\leq u)=u,{\text{when }}U{\text{ is uniform on }}(0,1))\\[6pt]&=\int \limits _{y:g(y)>0}{\frac {f(y)}{Mg(y)}}g(y)\,dy\\[6pt]&={\frac {1}{M}}\int \limits _{y:g(y)>0}f(y)\,dy\\[6pt]&={\frac {1}{M}}&({\text{since support of }}Y{\text{ includes support of }}X)\end{aligned}}}whereU∼Unif(0,1){\displaystyle U\sim \mathrm {Unif} (0,1)}, and the value ofy{\displaystyle y}each time is generated under the density functiong(⋅){\displaystyle g(\cdot )}of the proposal distributionY{\displaystyle Y}. The number of samples required fromY{\displaystyle Y}to obtain an accepted value thus follows ageometric distributionwith probability1/M{\displaystyle 1/M}, which has meanM{\displaystyle M}. Intuitively,M{\displaystyle M}is the expected number of the iterations that are needed, as a measure of the computational complexity of the algorithm. Rewrite the above equation,M=1P(U≤f(Y)Mg(Y)){\displaystyle M={\frac {1}{\mathbb {P} \left(U\leq {\frac {f(Y)}{Mg(Y)}}\right)}}}Note that1≤M<∞{\textstyle 1\leq M<\infty }, due to the above formula, whereP(U≤f(Y)Mg(Y)){\textstyle \mathbb {P} \left(U\leq {\frac {f(Y)}{Mg(Y)}}\right)}is a probability which can only take values in the interval[0,1]{\displaystyle [0,1]}. WhenM{\displaystyle M}is chosen closer to one, the unconditional acceptance probability is higher the less that ratio varies, sinceM{\displaystyle M}is the upper bound for the likelihood ratiof(x)/g(x){\textstyle f(x)/g(x)}. In practice, a value ofM{\displaystyle M}closer to 1 is preferred as it implies fewer rejected samples, on average, and thus fewer iterations of the algorithm. 
In this sense, one prefers to haveM{\displaystyle M}as small as possible (while still satisfyingf(x)≤Mg(x){\displaystyle f(x)\leq Mg(x)}, which suggests thatg(x){\displaystyle g(x)}should generally resemblef(x){\displaystyle f(x)}in some way. Note, however, thatM{\displaystyle M}cannot be equal to 1: such would imply thatf(x)=g(x){\displaystyle f(x)=g(x)}, i.e. that the target and proposal distributions are actually the same distribution. Rejection sampling is most often used in cases where the form off(x){\displaystyle f(x)}makes sampling difficult. A single iteration of the rejection algorithm requires sampling from the proposal distribution, drawing from a uniform distribution, and evaluating thef(x)/(Mg(x)){\displaystyle f(x)/(Mg(x))}expression. Rejection sampling is thus more efficient than some other method whenever M times the cost of these operations—which is the expected cost of obtaining a sample with rejection sampling—is lower than the cost of obtaining a sample using the other method. The algorithm, which was used byJohn von Neumann[4]and dates back toBuffonandhis needle,[5]obtains a sample from distributionX{\displaystyle X}with densityf{\displaystyle f}using samples from distributionY{\displaystyle Y}with densityg{\displaystyle g}as follows: The algorithm will take an average ofM{\displaystyle M}iterations to obtain a sample.[6] Rejection sampling can be far more efficient compared with the naive methods in some situations. For example, given a problem as samplingX∼F(⋅){\textstyle X\sim F(\cdot )}conditionally onX{\displaystyle X}given the setA{\displaystyle A}, i.e.,X|X∈A{\textstyle X|X\in A}, sometimesX{\textstyle X}can be easily simulated, using the naive methods (e.g. byinverse transform sampling): The problem is this sampling can be difficult and inefficient, ifP(X∈A)≈0{\textstyle \mathbb {P} (X\in A)\approx 0}. The expected number of iterations would be1P(X∈A){\displaystyle {\frac {1}{\mathbb {P} (X\in A)}}}, which could be close to infinity. Moreover, even when you apply the Rejection sampling method, it is always hard to optimize the boundM{\displaystyle M}for the likelihood ratio. More often than not,M{\displaystyle M}is large and the rejection rate is high, the algorithm can be very inefficient. TheNatural Exponential Family(if it exists), also known as exponential tilting, provides a class of proposal distributions that can lower the computation complexity, the value ofM{\displaystyle M}and speed up the computations (see examples: working with Natural Exponential Families). Given a random variableX∼F(⋅){\displaystyle X\sim F(\cdot )},F(x)=P(X≤x){\displaystyle F(x)=\mathbb {P} (X\leq x)}is the target distribution. Assume for simplicity, the density function can be explicitly written asf(x){\displaystyle f(x)}. Choose the proposal as Fθ(x)=E[exp⁡(θX−ψ(θ))I(X≤x)]=∫−∞xeθy−ψ(θ)f(y)dygθ(x)=Fθ′(x)=eθx−ψ(θ)f(x){\displaystyle {\begin{aligned}F_{\theta }(x)&=\mathbb {E} \left[\exp(\theta X-\psi (\theta ))\mathbb {I} (X\leq x)\right]\\&=\int _{-\infty }^{x}e^{\theta y-\psi (\theta )}f(y)dy\\g_{\theta }(x)&=F'_{\theta }(x)=e^{\theta x-\psi (\theta )}f(x)\end{aligned}}} whereψ(θ)=log⁡(Eexp⁡(θX)){\displaystyle \psi (\theta )=\log \left(\mathbb {E} \exp(\theta X)\right)}andΘ={θ:ψ(θ)<∞}{\displaystyle \Theta =\{\theta :\psi (\theta )<\infty \}}. Clearly,{Fθ(⋅)}θ∈Θ{\displaystyle \{F_{\theta }(\cdot )\}_{\theta \in \Theta }}, is from anatural exponential family. 
Moreover, the likelihood ratio is Z(x)=f(x)gθ(x)=f(x)eθx−ψ(θ)f(x)=e−θx+ψ(θ){\displaystyle Z(x)={\frac {f(x)}{g_{\theta }(x)}}={\frac {f(x)}{e^{\theta x-\psi (\theta )}f(x)}}=e^{-\theta x+\psi (\theta )}} Note thatψ(θ)<∞{\displaystyle \psi (\theta )<\infty }implies that it is indeed acumulant-generation function, that is, It is easy to derive the cumulant-generation function of the proposal and therefore the proposal's cumulants. As a simple example, suppose underF(⋅){\displaystyle F(\cdot )},X∼N(μ,σ2){\displaystyle X\sim \mathrm {N} (\mu ,\sigma ^{2})}, withψ(θ)=μθ+σ2θ22{\textstyle \psi (\theta )=\mu \theta +{\frac {\sigma ^{2}\theta ^{2}}{2}}}. The goal is to sampleX|X∈[b,∞]{\displaystyle X|X\in \left[b,\infty \right]}, whereb>μ{\displaystyle b>\mu }. The analysis goes as follows: holds, accept the value ofX{\displaystyle X}; if not, continue sampling newX∼i.i.d.N(μ+θ∗σ2,σ2){\textstyle X\sim _{i.i.d.}\mathrm {N} (\mu +\theta ^{*}\sigma ^{2},\sigma ^{2})}and newU∼Unif(0,1){\textstyle U\sim \mathrm {Unif} (0,1)}until acceptance. For the above example, as the measurement of the efficiency, the expected number of the iterations the natural exponential family based rejection sampling method is of orderb{\displaystyle b}, that isM(b)=O(b){\displaystyle M(b)=O(b)}, while under the naive method, the expected number of the iterations is1P(X≥b)=O(b⋅e(b−μ)22σ2){\textstyle {\frac {1}{\mathbb {P} (X\geq b)}}=O(b\cdot e^{\frac {(b-\mu )^{2}}{2\sigma ^{2}}})}, which is far more inefficient. In general,exponential tiltinga parametric class of proposal distribution, solves the optimization problems conveniently, with its useful properties that directly characterize the distribution of the proposal. For this type of problem, to simulateX{\displaystyle X}conditionally onX∈A{\displaystyle X\in A}, among the class of simple distributions, the trick is to use natural exponential family, which helps to gain some control over the complexity and considerably speed up the computation. Indeed, there are deep mathematical reasons for using natural exponential family. Rejection sampling requires knowing the target distribution (specifically, ability to evaluate target PDF at any point). Rejection sampling can lead to a lot of unwanted samples being taken if the function being sampled is highly concentrated in a certain region, for example a function that has a spike at some location. For many distributions, this problem can be solved using an adaptive extension (seeadaptive rejection sampling), or with an appropriate change of variables with the method of theratio of uniforms. In addition, as the dimensions of the problem get larger, the ratio of the embedded volume to the "corners" of the embedding volume tends towards zero, thus a lot of rejections can take place before a useful sample is generated, thus making the algorithm inefficient and impractical. Seecurse of dimensionality. In high dimensions, it is necessary to use a different approach, typically a Markov chain Monte Carlo method such asMetropolis samplingorGibbs sampling. (However, Gibbs sampling, which breaks down a multi-dimensional sampling problem into a series of low-dimensional samples, may use rejection sampling as one of its steps.) For many distributions, finding a proposal distribution that includes the given distribution without a lot of wasted space is difficult. 
An extension of rejection sampling that can be used to overcome this difficulty and efficiently sample from a wide variety of distributions (provided that they havelog-concavedensity functions, which is in fact the case for most of the common distributions—even those whosedensityfunctions are not concave themselves) is known asadaptive rejection sampling (ARS). There are three basic ideas to this technique as ultimately introduced by Gilks in 1992:[7] The method essentially involves successively determining an envelope of straight-line segments that approximates the logarithm better and better while still remaining above the curve, starting with a fixed number of segments (possibly just a single tangent line). Sampling from a truncated exponential random variable is straightforward. Just take the log of a uniform random variable (with appropriate interval and corresponding truncation). Unfortunately, ARS can only be applied for sampling from log-concave target densities. For this reason, several extensions of ARS have been proposed in literature for tackling non-log-concave target distributions.[9][10][11]Furthermore, different combinations of ARS and the Metropolis-Hastings method have been designed in order to obtain a universal sampler that builds a self-tuning proposal densities (i.e., a proposal automatically constructed and adapted to the target). This class of methods are often called asAdaptive Rejection Metropolis Sampling (ARMS) algorithms.[12][13]The resulting adaptive techniques can be always applied but the generated samples are correlated in this case (although the correlation vanishes quickly to zero as the number of iterations grows).
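As a concluding sketch that ties the pieces together, the following Python code implements the exponential-tilting example from the earlier section: sampling X | X ≥ b for X ~ N(μ, σ²) using the tilted proposal N(μ + θ*σ², σ²) with θ* = (b − μ)/σ². The acceptance test exp(−θ*(x − b)) follows from that derivation, but the implementation itself is mine, not the article's.

import numpy as np

rng = np.random.default_rng(2)

def truncated_normal_tilted(mu, sigma, b, n):
    """Sample X ~ N(mu, sigma^2) conditioned on X >= b via exponential tilting.

    The tilted proposal is N(mu + theta*sigma^2, sigma^2) = N(b, sigma^2)
    with theta = (b - mu) / sigma^2; a draw x >= b is accepted with
    probability exp(-theta * (x - b)), the ratio f(x) / (M g(x)).
    """
    theta = (b - mu) / sigma**2
    out = []
    while len(out) < n:
        x = rng.normal(b, sigma)             # draw from the tilted proposal
        if x >= b and rng.uniform() <= np.exp(-theta * (x - b)):
            out.append(x)
    return np.array(out)

s = truncated_normal_tilted(mu=0.0, sigma=1.0, b=4.0, n=5_000)
print(f"min = {s.min():.3f}, mean = {s.mean():.3f}")   # mean slightly above b
# Naive resampling of N(0,1) until X >= 4 needs ~1/Phi(-4) ~ 30,000 draws per
# accepted sample; the tilted proposal accepts roughly 9% of its draws here,
# a speedup of about three orders of magnitude.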
https://en.wikipedia.org/wiki/Rejection_sampling
In actuarial science, the Esscher transform (Gerber & Shiu 1994) is a transform that takes a probability density f(x) and transforms it to a new probability density f(x; h) with a parameter h. It was introduced by F. Esscher in 1932 (Esscher 1932).

Let f(x) be a probability density. Its Esscher transform is defined as

\[ f(x;h) = \frac{e^{hx} f(x)}{\int_{-\infty}^{\infty} e^{hy} f(y)\, dy}. \]

More generally, if μ is a probability measure, the Esscher transform of μ is a new probability measure E_h(μ) which has density

\[ \frac{e^{hx}}{\int_{-\infty}^{\infty} e^{hy}\, d\mu(y)} \]

with respect to μ.
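A standard worked example (not in the article's text, but a classical computation) shows what the transform does to a normal density:

\[
f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
\quad\Longrightarrow\quad
f(x;h) = \frac{e^{hx} f(x)}{\int e^{hy} f(y)\,dy}
= \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu-h\sigma^2)^2}{2\sigma^2}},
\]

since \( \int e^{hy} f(y)\,dy = e^{h\mu + h^2\sigma^2/2} \) is the moment generating function of N(μ, σ²), and completing the square in the exponent gives \( hx - \frac{(x-\mu)^2}{2\sigma^2} - h\mu - \frac{h^2\sigma^2}{2} = -\frac{(x-\mu-h\sigma^2)^2}{2\sigma^2} \). So E_h maps N(μ, σ²) to N(μ + hσ², σ²); this is the same exponential tilting used to construct proposal distributions in rejection sampling.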
https://en.wikipedia.org/wiki/Esscher_transform
An autonomous agent is an artificial intelligence (AI) system that can perform complex tasks independently.[1]

There are various definitions of autonomous agent. According to Brustoloni (1991): "Autonomous agents are systems capable of autonomous, purposeful action in the real world."[2]

According to Maes (1995): "Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed."[3]

Franklin and Graesser (1997) review different definitions and propose their own: "An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future."[4] They explain that: "Humans and some animals are at the high end of being an agent, with multiple, conflicting drives, multiple senses, multiple possible actions, and complex sophisticated control structures. At the low end, with one or two senses, a single action, and an absurdly simple control structure, we find a thermostat."[4]

Lee et al. (2015) raise a safety issue: how the combination of external appearance and internal autonomy affects human reactions to autonomous vehicles. Their study finds that a human-like appearance and a high level of autonomy are strongly correlated with perceived social presence, intelligence, safety, and trustworthiness. Specifically, appearance has the greatest impact on affective trust, while autonomy affects both the affective and cognitive domains of trust, where cognitive trust is characterized by knowledge-based factors and affective trust is largely emotion-driven.[5]
https://en.wikipedia.org/wiki/Autonomous_agent
Biologically Inspired Cognitive Architectures (BICA) was a DARPA project administered by the Information Processing Technology Office (IPTO). BICA began in 2005 and was designed to create the next generation of cognitive architecture models of human artificial intelligence. Its first phase (Design) ran from September 2005 to around October 2006 and was intended to generate new ideas for biological architectures that could be used to create embodied computational architectures of human intelligence. The second phase (Implementation) of BICA was set to begin in the spring of 2007 and would have involved the actual construction of new intelligent agents that live and behave in a virtual environment. However, this phase was canceled by DARPA, reportedly because it was seen as being too ambitious.[1]

BICA is now a transdisciplinary study that aims to design, characterize and implement human-level cognitive architectures. There is also the BICA Society, a scientific nonprofit organization formed to promote and facilitate this study.[2] On its website,[3] the society maintains an extensive comparison table of various cognitive architectures.[4]
https://en.wikipedia.org/wiki/Biologically_inspired_cognitive_architectures
Understanding how the brain works is arguably one of the greatest scientific challenges of our time. TheWhite HouseBRAIN Initiative(Brain Research through Advancing Innovative Neurotechnologies) is a collaborative, public-private research initiative announced by theObama administrationon April 2, 2013, with the goal of supporting the development and application of innovative technologies that can create a dynamic understanding ofbrainfunction.[2][3][4][5][6] This activity is aGrand Challengefocused on revolutionizing our understanding of the human brain, and was developed by the White HouseOffice of Science and Technology Policy(OSTP) as part of a broaderWhite House Neuroscience Initiative.[7]Inspired by theHuman Genome Project, BRAIN aims to help researchers uncover the mysteries ofbrain disorders, such asAlzheimer'sandParkinson'sdiseases,depression, andtraumatic brain injury(TBI). Participants in BRAIN and affiliates of the project includeDARPAandIARPAas well as numerous private companies, universities, and other organizations in the United States, Australia, Canada, and Denmark.[8] The BRAIN Initiative reflects a number of influences, stemming back over a decade. Some of these include: planning meetings at theNational Institutes of Healththat led to the NIH's Blueprint for Neuroscience Research;[9]workshops at theNational Science Foundation(NSF) oncognition,neuroscience, andconvergent science, including a 2006 report on "Grand Challenges of Mind and Brain";[10]reports from theNational Research Counciland theInstitute of Medicine's Forum on Neuroscience and Nervous System Disorders, including "From Molecules to Mind: Challenges for the 21st Century," a report of a June 25, 2008 Workshop on Grand Challenges in Neuroscience.;[11]years of research and reports from scientists and professional societies; and congressional interest. One important activity was theBrain Activity Map Project. In September 2011, molecular biologist Miyoung Chun ofThe Kavli Foundationorganized a conference in London, at which scientists first put forth the idea of such a project.[4][12]At subsequent meetings, scientists from US government laboratories, including members of theOffice of Science and Technology Policy, and from theHoward Hughes Medical Instituteand theAllen Institute for Brain Science, along with representatives fromGoogle,Microsoft, andQualcomm, discussed possibilities for a future government-led project.[2] Other influences included the interdisciplinary "Decade of the Mind" project led by James L. 
Olds, who is currently the Assistant Director for Biological Sciences at NSF,[13][14]and the "Revolutionizing Prosthetics" project atDARPA, led by Dr.Geoffrey Lingand shown on60 Minutesin April 2009.[15] Development of the plan for the BRAIN Initiative within theExecutive Office of the President(EOP) was led by OSTP and included the following EOP staff:Philip Rubin, then Principal Assistant Director for Science and leader of the White House Neuroscience Initiative;Thomas Kalil, Deputy Director for Technology and Innovation;Cristin Dorgelo, then Assistant Director for Grand Challenges, and later Chief of Staff at OSTP; and Carlos Peña, Assistant Director for Emerging Technologies and currently the Division Director for the Division of Neurological and Physical Medicine Devices, in the Office of Device Evaluation, Center for Devices and Radiological Health (CDRH), at the U.S.Food and Drug Administration(FDA).[16][17] On April 2, 2013, at a White House event, PresidentBarack Obamaannounced The BRAIN Initiative, with proposed initial expenditures for fiscal year 2014 of approximately $110 million from theDefense Advanced Research Projects Agency(DARPA), theNational Institutes of Health(NIH), and theNational Science Foundation(NSF).[4][5][6]The President also directed thePresidential Commission for the Study of Bioethical Issuesto explore the ethical, legal, and societal implications raised by the initiative and by neuroscience in general. Additional commitments were also made by theAllen Institute for Brain Science, theHoward Hughes Medical Institute, andThe Kavli Foundation. The NIH also announced the creation of a working group of the Advisory Committee to the Director, led by neuroscientistsCornelia BargmannandWilliam Newsomeand withex officioparticipation from DARPA and NSF, to help shape NIH's role in the BRAIN Initiative. NSF planned to receive advice from its directorate advisory committees, from theNational Science Board, and from a series of meetings bringing together scientists in neuroscience and related areas.[4][5][6] News reports said the research would map the dynamics of neuron activity inmice and other animals[3]and eventually the tens of billions of neurons in the human brain.[18] In a 2012 scientific commentary outlining experimental plans for a more limited project, Alivisatoset al.outlined a variety of specific experimental techniques that might be used to achieve what they termed a "functionalconnectome", as well as new technologies that will have to be developed in the course of the project.[1]They indicated that initial studies might be done inCaenorhabditis elegans, followed byDrosophila, because of their comparatively simple neural circuits. Mid-term studies could be done inzebrafish,mice, and theEtruscan shrew, with studies ultimately to be done inprimatesand humans. They proposed the development ofnanoparticlesthat could be used asvoltagesensorsthat would detect individualaction potentials, as well asnanoprobesthat could serve aselectrophysiologicalmultielectrode arrays. In particular, they called for the use of wireless, noninvasive methods of neuronal activity detection, either utilizingmicroelectronicvery-large-scale integration, or based onsynthetic biologyrather than microelectronics. 
In one such proposed method, enzymatically produced DNA would serve as a "ticker tape record" of neuronal activity,[1][19] based on calcium ion-induced errors in coding by DNA polymerase.[20] Data would be analyzed and modeled by large-scale computation.[1] A related technique proposed the use of high-throughput DNA sequencing for rapidly mapping neural connectivity.[21] The Working Group proposed a timeline for the initiative in 2014.[22] The initiative is guided by an advisory committee.[23] As of December 2018, the BRAIN Initiative website listed numerous participants and affiliates. Scientists offered differing views of the plan. Neuroscientist John Donoghue said that the project would fill a gap in neuroscience research between, on the one hand, activity measurements at the level of brain regions using methods such as fMRI, and, on the other hand, measurements at the level of single cells.[3] Psychologist Ed Vul expressed concern, however, that the initiative would divert funding from individual investigator studies.[3] Neuroscientist Donald Stein expressed concern that it would be a mistake to begin by spending money on technological methods before knowing exactly what would be measured.[4] Physicist Michael Roukes argued instead that methods in nanotechnology are becoming sufficiently mature to make the time right for a brain activity map.[4] Neuroscientist Rodolfo Llinás declared at the first Rockefeller meeting, "What has happened here is magnificent, never before in neuroscience have I seen so much unity in such a glorious purpose."[24] The projects face great logistical challenges. Neuroscientists estimated that the project would generate 300 exabytes of data every year, presenting a significant technical barrier.[25] Most of the available high-resolution brain activity monitors are of limited use, as they must be invasively implanted surgically by opening the skull.[25] Parallels have been drawn to past large-scale government-led research efforts including the map of the human genome, the voyage to the moon, and the development of the atomic bomb.[2]
https://en.wikipedia.org/wiki/BRAIN_Initiative
The following table compares cognitive architectures.
https://en.wikipedia.org/wiki/Cognitive_architecture_comparison
Cognitive computing refers to technology platforms that, broadly speaking, are based on the scientific disciplines of artificial intelligence and signal processing. These platforms encompass machine learning, reasoning, natural language processing, speech recognition and vision (object recognition), human–computer interaction, dialog and narrative generation, among other technologies.[1][2] At present, there is no widely agreed upon definition for cognitive computing in either academia or industry.[1][3][4] In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain.[5][6][7][8][9] In this sense, cognitive computing is a new type of computing with the goal of more accurate models of how the human brain/mind senses, reasons, and responds to stimulus. Cognitive computing applications link data analysis and adaptive page displays (AUI) to adjust content for a particular type of audience. As such, cognitive computing hardware and applications strive to be more affective and more influential by design. The term "cognitive system" also applies to any artificial construct able to perform a cognitive process, where a cognitive process is the transformation of data, information, knowledge, or wisdom to a new level in the DIKW Pyramid.[10] While many cognitive systems employ techniques having their origination in artificial intelligence research, cognitive systems, themselves, may not be artificially intelligent. For example, a neural network trained to recognize cancer on an MRI scan may achieve a higher success rate than a human doctor. This system is certainly a cognitive system but is not artificially intelligent. Cognitive systems may be engineered to feed on dynamic data in real time, or near real time,[11] and may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).[12] Cognitive computing-branded technology platforms typically specialize in the processing and analysis of large, unstructured datasets.[13] Cognitive computing, in conjunction with big data and algorithms that comprehend customer needs, can be a major advantage in economic decision making. The powers of cognitive computing and artificial intelligence hold the potential to affect almost every task that humans are capable of performing. This can negatively affect employment for humans, as there would no longer be the same need for human labor. It would also increase the inequality of wealth; the people at the head of the cognitive computing industry would grow significantly richer, while workers without ongoing, reliable employment would become less well off.[22] The more industries start to use cognitive computing, the more difficult it will be for humans to compete.[22] Increased use of the technology will also increase the amount of work that AI-driven robots and machines can perform. Only extraordinarily talented, capable and motivated humans would be able to keep up with the machines. The influence of competitive individuals in conjunction with artificial intelligence/cognitive computing has the potential to change the course of humankind.[23]
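The DIKW-based definition above can be made concrete with a short sketch. The following Python fragment is illustrative only; the variable names, sensor scenario, and thresholds are invented here and are not drawn from any cognitive computing platform. It "lifts" raw readings one level at a time, from data to information to knowledge:

```python
from statistics import mean

# Data: raw, uninterpreted readings from a hypothetical temperature sensor (deg C).
data = [21.0, 21.4, 35.9, 36.2, 36.0, 21.1]

# Information: data given context -- here, labelling each reading as abnormal or not.
information = [{"reading": r, "abnormal": r > 30.0} for r in data]

# Knowledge: a generalization inferred from the information.
abnormal = [x["reading"] for x in information if x["abnormal"]]
knowledge = {
    "abnormal_episodes": len(abnormal),
    "abnormal_mean": mean(abnormal) if abnormal else None,
}

print(knowledge)  # -> {'abnormal_episodes': 3, 'abnormal_mean': 36.03...}
```

On this definition, any construct performing such a transformation counts as a cognitive system, whether or not anything in it is "artificially intelligent."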
https://en.wikipedia.org/wiki/Cognitive_computing
Google Brainwas adeep learningartificial intelligenceresearch team that served as the sole AI branch of Google before being incorporated under the newer umbrella ofGoogle AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, it combined open-ended machine learning research with information systems and large-scale computing resources.[1]It created tools such asTensorFlow, which allow neural networks to be used by the public, and multiple internal AI research projects,[2]and aimed to create research opportunities inmachine learningandnatural language processing.[2]It was merged into former Google sister company DeepMind to formGoogle DeepMindin April 2023. The Google Brain project began in 2011 as a part-time research collaboration between Google fellowJeff Deanand Google Researcher Greg Corrado.[3]Google Brain started as aGoogle Xproject and became so successful that it was graduated back to Google:Astro Tellerhas said that Google Brain paid for the entire cost ofGoogle X.[4] In June 2012, theNew York Timesreported that a cluster of 16,000processorsin 1,000computersdedicated to mimicking some aspects ofhuman brain activityhad successfully trained itself to recognize acatbased on 10 million digital images taken fromYouTubevideos.[3]The story was also covered byNational Public Radio.[5] In March 2013, Google hiredGeoffrey Hinton, a leading researcher in thedeep learningfield, and acquired the company DNNResearch Inc. headed by Hinton. Hinton said that he would be dividing his future time between his university research and his work at Google.[6] In April 2023, Google Brain merged with Google sister company DeepMind to formGoogle DeepMind, as part of the company's continued efforts to accelerate work on AI.[7] Google Brain was initially established by Google FellowJeff Deanand visiting Stanford professorAndrew Ng. In 2014, the team includedJeff Dean,Quoc Le,Ilya Sutskever,Alex Krizhevsky,Samy Bengio, and Vincent Vanhoucke. In 2017, team members included Anelia Angelova,Samy Bengio, Greg Corrado, George Dahl, Michael Isard, Anjuli Kannan, Hugo Larochelle, Chris Olah, Salih Edneer, Benoit Steiner, Vincent Vanhoucke, Vijay Vasudevan, andFernanda Viegas.[8]Chris Lattner, who createdApple's programming languageSwiftand then ranTesla's autonomy team for six months, joined Google Brain's team in August 2017.[9]Lattner left the team in January 2020 and joinedSiFive.[10] As of 2021[update], Google Brain was led byJeff Dean,Geoffrey Hinton, andZoubin Ghahramani. Other members include Katherine Heller, Pi-Chuan Chang, Ian Simon, Jean-Philippe Vert, Nevena Lazic, Anelia Angelova, Lukasz Kaiser, Carrie Jun Cai, Eric Breck, Ruoming Pang, Carlos Riquelme, Hugo Larochelle, and David Ha.[8]Samy Bengioleft the team in April 2021,[11]andZoubin Ghahramanitook on his responsibilities. Google Research includes Google Brain and is based inMountain View, California. 
It also has satellite groups inAccra,Amsterdam,Atlanta,Beijing,Berlin,Cambridge (Massachusetts),Israel,Los Angeles,London,Montreal,Munich,New York City,Paris,Pittsburgh,Princeton,San Francisco,Seattle,Tokyo,Toronto, andZürich.[12] In October 2016, Google Brain designed an experiment to determine thatneural networksare capable of learning securesymmetric encryption.[13]In this experiment, threeneural networkswere created: Alice, Bob and Eve.[14]Adhering to the idea of agenerative adversarial network(GAN), the goal of the experiment was for Alice to send an encrypted message to Bob that Bob could decrypt, but the adversary, Eve, could not.[14]Alice and Bob maintained an advantage over Eve, in that they shared akeyused forencryptionanddecryption.[13]In doing so, Google Brain demonstrated the capability ofneural networksto learn secureencryption.[13] In February 2017, Google Brain determined aprobabilistic methodfor converting pictures with 8x8resolutionto a resolution of 32x32.[15][16]The method built upon an already existing probabilistic model called pixelCNN to generate pixel translations.[17][18] The proposed software utilizes twoneural networksto make approximations for thepixelmakeup of translated images.[16][19]The first network, known as the "conditioning network," downsizeshigh-resolutionimages to 8x8 and attempts to create mappings from the original 8x8 image to these higher-resolution ones.[16]The other network, known as the "prior network," uses the mappings from the previous network to add more detail to the original image.[16]The resulting translated image is not the same image in higher resolution, but rather a 32x32 resolution estimation based on other existing high-resolution images.[16]Google Brain's results indicate the possibility for neural networks to enhance images.[20] The Google Brain team contributed to theGoogle Translateproject by employing a new deep learning system that combines artificial neural networks with vast databases ofmultilingualtexts.[21]In September 2016,Google Neural Machine Translation(GNMT) was launched, an end-to-end learning framework, able to learn from a large number of examples.[21]Previously, Google Translate's Phrase-Based Machine Translation (PBMT) approach would statistically analyze word by word and try to match corresponding words in other languages without considering the surrounding phrases in the sentence.[22]But rather than choosing a replacement for each individual word in the desired language, GNMT evaluates word segments in the context of the rest of the sentence to choose more accurate replacements.[2]Compared to older PBMT models, the GNMT model scored a 24% improvement in similarity to human translation, with a 60% reduction in errors.[2][21]The GNMT has also shown significant improvement for notoriously difficult translations, likeChinesetoEnglish.[21] While the introduction of the GNMT has increased the quality of Google Translate's translations for the pilot languages, it was very difficult to create such improvements for all of its 103 languages. Addressing this problem, the Google Brain Team was able to develop aMultilingualGNMTsystem, which extended the previous one by enabling translations between multiple languages. Furthermore, it allows for Zero-Shot Translations, which are translations between two languages that the system has never explicitly seen before.[23]Google announced that Google Translate can now also translate without transcribing, using neural networks. 
This means that it is possible to translate speech in one language directly into text in another language, without first transcribing it to text. According to the researchers at Google Brain, this intermediate step can be avoided using neural networks. In order for the system to learn this, they exposed it to many hours of Spanish audio together with the corresponding English text. The different layers of neural networks, replicating the human brain, were able to link the corresponding parts and subsequently manipulate the audio waveform until it was transformed to English text.[24] Another drawback of the GNMT model is that it causes the time of translation to increase exponentially with the number of words in the sentence.[2] This caused the Google Brain Team to add 2000 more processors to ensure the new translation process would still be fast and reliable.[22] Aiming to improve traditional robotics control algorithms where new skills of a robot need to be hand-programmed, robotics researchers at Google Brain are developing machine learning techniques to allow robots to learn new skills on their own.[25] They also attempt to develop ways for information sharing between robots so that robots can learn from each other during their learning process, also known as cloud robotics.[26] As a result, in 2019 Google launched the Google Cloud Robotics Platform for developers, an effort to combine robotics, AI, and the cloud to enable efficient robotic automation through cloud-connected collaborative robots.[26] Robotics research at Google Brain has focused mostly on improving and applying deep learning algorithms to enable robots to complete tasks by learning from experience, simulation, human demonstrations, and/or visual representations.[27][28][29][30] For example, Google Brain researchers showed that robots can learn to pick and throw rigid objects into selected boxes by experimenting in an environment without being pre-programmed to do so.[27] In other research, researchers trained robots to learn behaviors such as pouring liquid from a cup; the robots learned from videos of human demonstrations recorded from multiple viewpoints.[29] Google Brain researchers have collaborated with other companies and academic institutions on robotics research. In 2016, the Google Brain Team collaborated with researchers at X in research on learning hand-eye coordination for robotic grasping.[31] Their method allowed real-time robot control for grasping novel objects with self-correction.[31] In 2020, researchers from Google Brain, Intel AI Lab, and UC Berkeley created an AI model for robots to learn surgery-related tasks such as suturing from training with surgery videos.[30] In 2020, the Google Brain Team and the University of Lille presented a model for automatic speaker recognition which they called Interactive Speaker Recognition.
The ISR module recognizes a speaker from a given list of speakers only by requesting a few user-specific words.[32] The model can be altered to choose speech segments in the context of Text-To-Speech training.[32] It can also prevent malicious voice generators from accessing the data.[32] TensorFlow is an open source software library powered by Google Brain that allows anyone to utilize machine learning by providing the tools to train one's own neural network (a minimal illustration appears at the end of this section).[2] The tool has been used to develop software using deep learning models that farmers use to reduce the amount of manual labor required to sort their yield, by training it with a data set of human-sorted images.[2] Magenta is a project that uses Google Brain to create new information in the form of art and music rather than classify and sort existing data.[2] TensorFlow was updated with a suite of tools for users to guide the neural network to create images and music.[2] However, the team from Valdosta State University found that the AI struggles to perfectly replicate human intention in artistry, similar to the issues faced in translation.[2] The image sorting capabilities of Google Brain have been used to help detect certain medical conditions by seeking out patterns that human doctors may not notice to provide an earlier diagnosis.[2] During screening for breast cancer, this method was found to have one quarter the false positive rate of human pathologists, who require more time to look over each photo and cannot spend their entire focus on this one task.[2] Due to the neural network's very specific training for a single task, it cannot identify other afflictions present in a photo that a human could easily spot.[2] The transformer deep learning architecture was invented by Google Brain researchers in 2017, and explained in the scientific paper Attention Is All You Need.[33] Google owns a patent on this widely used architecture, but hasn't enforced it.[34][35] Google Brain announced in 2022 that it created two different types of text-to-image models called Imagen and Parti that compete with OpenAI's DALL-E.[36][37] Later in 2022, the project was extended to text-to-video.[38] Imagen development was transferred to Google DeepMind after the merger with DeepMind.[39] The Google Brain projects' technology is currently used in various other Google products such as the Android Operating System's speech recognition system, photo search for Google Photos, smart reply in Gmail, and video recommendations in YouTube.[40][41][42] Google Brain has received coverage in Wired,[43][44][45] NPR,[5] and Big Think.[46] These articles have contained interviews with key team members Ray Kurzweil and Andrew Ng, and focus on explanations of the project's goals and applications.[43][5][46] In December 2020, AI ethicist Timnit Gebru left Google.[47] While the exact nature of her quitting or being fired is disputed, the cause of the departure was her refusal to retract a paper entitled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"
and a related ultimatum she made, setting conditions to be met otherwise she would leave.[47]This paper explored potential risks of the growth of AI such as Google Brain, including environmental impact, biases in training data, and the ability to deceive the public.[47][48]The request to retract the paper was made by Megan Kacholia, vice president of Google Brain.[49]As of April 2021, nearly 7000 current or former Google employees and industry supporters have signed an open letter accusing Google of "research censorship" and condemning Gebru's treatment at the company.[50] In February 2021, Google fired one of the leaders of the company's AI ethics team,Margaret Mitchell.[49]The company's statement alleged that Mitchell had broken company policy by using automated tools to find support for Gebru.[49]In the same month, engineers outside the ethics team began to quit, citing the termination of Gebru as their reason for leaving.[51]In April 2021, Google Brain co-founderSamy Bengioannounced his resignation from the company.[11]Despite being Gebru's manager, Bengio was not notified before her termination, and he posted online in support of both her and Mitchell.[11]While Bengio's announcement focused on personal growth as his reason for leaving, anonymous sources indicated to Reuters that the turmoil within the AI ethics team played a role in his considerations.[11] In March 2022, Google fired AI researcher Satrajit Chatterjee after he questioned the findings of a paper published inNature, by Google's AI team members, Anna Goldie and Azalia Mirhoseini.[52][53]This paper reported good results from the use of AI techniques (in particular reinforcement learning) for theplacement problemforintegrated circuits.[54]However, this result is quite controversial,[55][56][57]as the paper does not contain head-to-head comparisons to existing placers, and is difficult to replicate due to proprietary content. At least one initially favorable commentary has been retracted upon further review,[58]and the paper is under investigation by Nature.[59]
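As an illustration of the TensorFlow workflow mentioned earlier in this section, the following minimal sketch trains a tiny network with the public tf.keras API. The task (XOR), layer sizes, and epoch count are arbitrary choices made here for illustration; this is not Google Brain's own code:

```python
import numpy as np
import tensorflow as tf

# Synthetic task: learn y = XOR of two binary inputs.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=500, verbose=0)  # tiny problem, so many cheap epochs

print(model.predict(x).round())  # ideally [[0], [1], [1], [0]]
```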
https://en.wikipedia.org/wiki/Google_Brain
In artificial intelligence, knowledge-based agents draw on a pool of logical sentences to infer conclusions about the world. At the knowledge level, we need only specify what the agent knows and what its goals are; this is a logical abstraction, separate from details of implementation. The notion of the knowledge level was first introduced by Allen Newell in the 1980s as a way to rationalize an agent's behavior. The agent takes actions based on knowledge it possesses, in an attempt to reach specific goals. It chooses actions according to the principle of rationality. Beneath the knowledge level resides the symbol level. Whereas the knowledge level is world oriented, namely that it concerns the environment in which the agent operates, the symbol level is system oriented, in that it includes the mechanisms the agent has available to operate. The knowledge level rationalizes the agent's behavior, while the symbol level mechanizes the agent's behavior. For example, in a computer program, the knowledge level consists of the information contained in its data structures that it uses to perform certain actions. The symbol level consists of the program's algorithms, the data structures themselves, and so on.
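A toy sketch can make the knowledge/symbol distinction concrete. At the knowledge level we state only what the agent knows (facts and rules) and wants (a goal); the dictionaries and loop below are merely one possible symbol-level mechanization, with names invented here for illustration:

```python
class KnowledgeAgent:
    def __init__(self, goal):
        self.facts = set()   # what the agent knows
        self.rules = []      # (premises, conclusion) pairs
        self.actions = {}    # action -> (precondition, effect)
        self.goal = goal     # what the agent wants

    def tell(self, fact):
        self.facts.add(fact)

    def tell_rule(self, premises, conclusion):
        self.rules.append((frozenset(premises), conclusion))

    def infer(self):
        # Forward chaining until no new facts appear (a symbol-level choice;
        # any sound inference mechanism would do at the knowledge level).
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True

    def act(self):
        # Principle of rationality: choose an action whose known effect is
        # the goal and whose precondition the agent believes to hold.
        self.infer()
        for action, (precondition, effect) in self.actions.items():
            if effect == self.goal and precondition in self.facts:
                return action
        return None

agent = KnowledgeAgent(goal="door_open")
agent.tell("have_key")
agent.tell_rule({"have_key"}, "can_unlock")
agent.actions["unlock"] = ("can_unlock", "door_open")
print(agent.act())  # -> "unlock"
```

The same knowledge-level description could be mechanized quite differently (backward chaining, theorem proving, a lookup table); the knowledge level deliberately abstracts away from that choice.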
https://en.wikipedia.org/wiki/Knowledge_level
The Modular Cognition Framework (MCF) is an open-ended theoretical framework for research into the way the mind is organized. It draws on the common ground shared by contemporary research in the various areas that are collectively known as cognitive science and is designed to be applicable to all these fields of research. It was established by Michael Sharwood Smith and John Truscott in the first decade of the 21st century with a particular focus on language cognition, when it was known as the MOGUL framework (Modular Online Growth and Use of Language). The MCF is open-ended in the sense that it has a set of basic principles (see below) describing the architecture of the human mind: these amount to setting out a skeleton model of the mind and providing a template for cognitive scientists to use. Both mind and brain are viewed as biological phenomena but at different levels of abstraction. These fundamental principles can be further interpreted in various ways by any researcher who is working with a theoretical approach that can be said to reflect, or can be aligned with, the basic principles. In doing so, researchers can identify their own hypotheses and research findings not only as confirming or challenging their own theory but also as a manifestation of the basic principles underlying all cognitive processing and representation. By the end of 2020 four books based specifically on the framework had been published, along with over 35 articles and chapters; numerous publications and theses by researchers using the MCF for their own purposes had also appeared. This work has built on the framework, giving it a richer, more elaborate structure in those areas that have been investigated. Nonetheless, different versions of these elaborations can still be proposed. The predominant assumption of the MCF is that the mind is composed of a collaborative network of functionally specialized systems which have evolved over time together with their physical manifestations in the brain, manifestations that reflect the mind's abstract organization albeit in very different ways. Researchers working in very different areas of cognitive science ought to be able without difficulty to see each other's research as an elaboration of the same framework. 1. Functional Specialisation. The mind has a modular architecture. This means it has a finite set of functionally specialised cognitive systems such as the auditory system, the motor system and the conceptual system. 2. Mind/Brain Relationship. Cognitive systems are manifested in the physical brain in various, often very different ways. This means that mind and brain, although two sides of the same coin, still require distinctly different levels of description and explanation. 3. Representational Diversity. Each system has its own unique operating principles such that its representations are formed in an identifiable manner and in ways that distinguish them from representations in any other system. The structure of any given representation is coded in such a way as to allow it to form more complex representations of the same kind, i.e. within its own system. Primitive representations in each system are the simplest and are provided in advance as part of our biological inheritance. In this way meaning (conceptual) representations can be combined with other conceptual representations to form more complex meanings. 4. Association. These cognitive systems form an interactive network allowing representations in different systems to be associated (but see below). 5. Information Encapsulation.
Due to the different codes in which representations of various types are written, one cognitive system cannot share information with another cognitive system. Representations of different types can only be associated and coactivated during online processing. 6. Coactivation. In response to current experience, associated representations across the mind as a whole are coactivated in parallel, forming temporary online representational networks or schemas. 7. Each Mind is Unique. The way in which combinations of representations of the same type are formed within a given system and the ways in which associations of representations of different types are formed over the lifetime of one individual make the mind of that individual unique. In other words, the fixed architecture of the mind still allows everyone to be different from everyone else and to respond to new experiences in different ways. 8. Acquisition by Processing. Change (development, acquisition, growth) occurs as a result of online processing. This principle is reflected in the following statement: acquisition is the lingering effect of processing (Truscott and Sharwood Smith 2004a,[1] 2004[2]). 9. Variable Activation Levels. Cognitive representations are activated online to different degrees and may compete with one another for participation in the building of more complex representations online. This is partly because they possess a resting level of activation which will rise or decline according to the frequency and regularity with which they are activated. Extremely high levels of activation are associated with phenomena described variously as attention, awareness and consciousness. Higher consciousness, which engages thought processes, is characterised by particularly intense levels of activation. This involves networks of representations (schemas) that simultaneously engage most if not all of the mind's systems. During this widespread, synchronised activity, the content of particular conceptual representations, along with all their associated representations in other systems, appears in our minds in perceptual form, in our mind's eye as the expression goes. This is achieved via the combined activation of the five sensory perceptual systems. Concepts are thus transformed, or projected, into thought processes as percepts. 10. Language versus Linguistic. Human language development and use is the product of the online interaction of all cognitive systems. However, it qualifies as human language by virtue of one or two (depending on the linguistic-theoretical perspective adopted) functionally specialized systems that have evolved specifically to handle linguistic structure. Each functionally specialized system (module) has a common structure consisting of a store and a processor. This store/processor combination holds for all systems and is a simple, abstract version of what, in its neural manifestation, can involve multiple locations and pathways in the physical brain. The processor is run according to the special operating principles of the given cognitive system as determined by the theory adopted by researchers in their relevant area of specialization. It controls the creation and combination online of its representations. The store is where representations are housed at various resting levels of activation and where they are activated. The mind does not have a single memory where all representations are stored and activated: it has many, that is, one store for each system.
In a given store, an activated representation, complex or otherwise, is said to be in that system's working memory (WM). In other words, WM is a state and not a system in its own right (Cowan 1999).[3] In a more general sense, WM can be thought of as a combination of all the currently activated representations, each in their individual stores. Representations are also called (cognitive) structures and this is reflected in the abbreviations. Hence a visual representation is called a visual structure and abbreviated as VS. Cognitive systems are linked by interfaces which can be thought of as simple processors that enable the association and coactivation of representations in adjoining systems. The visual/auditory interface, for example, links these two sensory perceptual systems and allows a visual representation to be associated and coactivated with a given auditory representation. Where a visual representation of, say, a tree is associated with the abstract meaning TREE, this would be explained as an association occurring between the visual and conceptual systems, i.e. across the VS/CS interface. The set of cognitive systems can be conceptualised as consisting of two types. The first, forming an outer ring, consists of the set of perceptual systems that each receive a particular type of raw input (visual, auditory, olfactory, etc.) from the external environment via the senses and each produce as their output their own cognitive representations of the world outside. This means that the world that we feel we know as the external world is actually the world that is represented internally in our five perceptual systems. Representations in these systems are collectively known as perceptual output structures (POpS). They are richly connected with one another and capable of the very high activation levels necessary for survival. This makes them an essential part of how conscious experience is to be explained. The second set of systems, at an inner or deeper level, are not connected directly with raw input coming in from the environment. They comprise the conceptual system responsible for abstract meanings, the affective system which is responsible for positive and negative values and basic emotions, the motor system and the spatial system. The final system, or set of two systems, is responsible for creating linguistic structure. The MCF currently uses the two-system alternative following Jackendoff;[4] the two systems are, respectively, the phonological system, which associates specific auditory structures with phonological structures (PS), and the syntactic system, which associates syntactic representations (SS) with meanings, i.e. conceptual structures (CS). Similarly, associations are also made between the two linguistic systems at the PS/SS interface. Inevitably the two linguistic systems are richly interconnected, along with their direct connections with the conceptual and auditory systems, and also the visual system, since it is currently assumed that sign language users make direct associations between visual representations (VS) and representations in the phonological store, hence making the phonological system do double duty (Sandler 1999).[5]
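The store/processor and interface machinery described above can be sketched schematically. The class names, numbers, and thresholds below are invented here for illustration and are not drawn from the MCF literature; the sketch only shows how a module's store, resting activation levels, WM-as-state, and cross-module coactivation fit together:

```python
class Module:
    """One functionally specialized system: a store of representations."""
    def __init__(self, name):
        self.name = name
        self.store = {}  # representation -> current activation level

    def add(self, rep, resting=0.1):
        self.store[rep] = resting  # housed at a resting activation level

    def activate(self, rep, amount):
        self.store[rep] = self.store.get(rep, 0.0) + amount

    def working_memory(self, threshold=0.5):
        # WM as a *state*: whatever is currently active in this store.
        return {r for r, a in self.store.items() if a >= threshold}

class Interface:
    """Links two modules; coactivates associated representation pairs."""
    def __init__(self, a, b):
        self.a, self.b, self.links = a, b, []

    def associate(self, rep_a, rep_b):
        self.links.append((rep_a, rep_b))

    def coactivate(self):
        for rep_a, rep_b in self.links:
            if self.a.store.get(rep_a, 0.0) >= 0.5:
                self.b.activate(rep_b, 0.5)

visual, conceptual = Module("VS"), Module("CS")
visual.add("tree-shape")
conceptual.add("TREE")
vs_cs = Interface(visual, conceptual)
vs_cs.associate("tree-shape", "TREE")

visual.activate("tree-shape", 0.9)   # perceiving a tree
vs_cs.coactivate()                   # the VS/CS interface fires
print(conceptual.working_memory())   # -> {'TREE'}
```

Note that the interface never copies a representation between stores; consistent with the encapsulation principle, it only raises the activation of an already-associated representation in the adjoining store.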
https://en.wikipedia.org/wiki/Modular_Cognition_Framework
The neural correlates of consciousness (NCC) are the minimal set of neuronal events and mechanisms sufficient for the occurrence of the mental states to which they are related.[2] Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena; that is, neural changes which necessarily and regularly correlate with a specific experience.[3][4] A science of consciousness must explain the exact relationship between subjective mental states and brain states, the nature of the relationship between the conscious mind and the electrochemical interactions in the body (mind–body problem). Progress in neuropsychology and neurophilosophy has come from focusing on the body rather than the mind. In this context the neuronal correlates of consciousness may be viewed as its causes, and consciousness may be thought of as a state-dependent property of an undefined complex, adaptive, and highly interconnected biological system.[5] Discovering and characterizing neural correlates does not offer a causal theory of consciousness that can explain how particular systems experience anything, the so-called hard problem of consciousness,[6] but understanding the NCC may be a step toward a causal theory. Most neurobiologists propose that the variables giving rise to consciousness are to be found at the neuronal level, governed by classical physics, though theories of quantum consciousness based on quantum mechanics have also been proposed.[7] There is an apparent redundancy and parallelism in neural networks so, while activity in one group of neurons may correlate with a percept in one case, a different population may mediate a related percept if the former population is lost or inactivated. It may be that every phenomenal, subjective state has a neural correlate. Where the NCC can be induced artificially, the subject will experience the associated percept, while perturbing or inactivating the region of correlation for a specific percept will affect the percept or cause it to disappear, giving a cause-effect relationship from the neural region to the nature of the percept.[citation needed] Questions that have been advanced over the years include: what characterizes the NCC? What are the commonalities between the NCC for seeing and for hearing? Will the NCC involve all the pyramidal neurons in the cortex at any given point in time? Or only a subset of long-range projection cells in the frontal lobes that project to the sensory cortices in the back? Neurons that fire in a rhythmic manner? Neurons that fire in an asynchronous manner?[8] The growing ability of neuroscientists to manipulate neurons using methods from molecular biology in combination with optical tools (e.g., Adamantidis et al. 2007) depends on the simultaneous development of appropriate behavioral assays and model organisms amenable to large-scale genomic analysis and manipulation. The combination of fine-grained neuronal analysis in animals with increasingly more sensitive psychophysical and brain imaging techniques in humans, complemented by the development of a robust theoretical predictive framework, will hopefully lead to a rational understanding of consciousness, one of the central mysteries of life. Research has shown significant measurable changes in brain structure at the end of the second trimester of human fetal development, changes which facilitate the emergence of early consciousness in the fetus.
These structural developments include the maturation of neural connections and the formation of key brain regions associated with sensory processing and emotional regulation. As these areas become more integrated, the fetus begins to exhibit responses to external stimuli, suggesting a nascent awareness of its environment. This early stage of consciousness is crucial, as it lays the foundation for later cognitive and social development, influencing how individuals will interact with the world around them after birth.[9] There are two common but distinct dimensions of the termconsciousness,[10]one involvingarousalandstates of consciousnessand the other involvingcontent of consciousnessandconscious states. To be consciousofanything the brain must be in a relatively high state of arousal (sometimes calledvigilance), whether in wakefulness orREM sleep, vividly experienced in dreams although usually not remembered. Brain arousal level fluctuates in acircadianrhythm but may be influenced by lack of sleep, drugs and alcohol, physical exertion, etc. Arousal can be measured behaviorally by the signal amplitude that triggers some criterion reaction (for instance, the sound level necessary to evoke an eye movement or a head turn toward the sound source). Clinicians use scoring systems such as theGlasgow Coma Scaleto assess the level of arousal in patients.[citation needed] High arousal states are associated with conscious states that have specific content, seeing, hearing, remembering, planning or fantasizing about something. Different levels or states of consciousness are associated with different kinds of conscious experiences. The "awake" state is quite different from the "dreaming" state (for instance, the latter has little or no self-reflection) and from the state of deep sleep. In all three cases the basic physiology of the brain is affected, as it also is inaltered states of consciousness, for instance after taking drugs or during meditation when conscious perception and insight may be enhanced compared to the normal waking state.[citation needed] Clinicians talk aboutimpaired states of consciousnessas in "thecomatose state", "thepersistent vegetative state" (PVS), and "theminimally conscious state" (MCS). Here, "state" refers to different "amounts" of external/physical consciousness, from a total absence in coma, persistent vegetative state and general anesthesia, to a fluctuating and limited form of conscious sensation in a minimally conscious state such as sleep walking or during a complex partialepilepticseizure.[11]The repertoire of conscious states or experiences accessible to a patient in a minimally conscious state is comparatively limited. In brain death there is no arousal, but it is unknown whether the subjectivity of experience has been interrupted, rather than its observable link with the organism. 
Functional neuroimaging has shown that parts of the cortex are still active in vegetative patients who are presumed to be unconscious;[12] however, these areas appear to be functionally disconnected from associative cortical areas whose activity is needed for awareness.[citation needed] The potential richness of conscious experience appears to increase from deep sleep to drowsiness to full wakefulness, as might be quantified using notions from complexity theory that incorporate both the dimensionality as well as the granularity of conscious experience to give an integrated-information-theoretical account of consciousness.[13] As behavioral arousal increases so does the range and complexity of possible behavior. Yet in REM sleep there is a characteristic atonia, low motor arousal and the person is difficult to wake up, but there is still high metabolic and electric brain activity and vivid perception. Many nuclei with distinct chemical signatures in the thalamus, midbrain and pons must function for a subject to be in a sufficient state of brain arousal to experience anything at all. These nuclei therefore belong to the enabling factors for consciousness. Conversely, it is likely that the specific content of any particular conscious sensation is mediated by particular neurons in the cortex and their associated satellite structures, including the amygdala, thalamus, claustrum and the basal ganglia.[citation needed][original research?] The possibility of precisely manipulating visual percepts in time and space has made vision a preferred modality in the quest for the NCC. Psychologists have perfected a number of techniques – masking, binocular rivalry, continuous flash suppression, motion induced blindness, change blindness, inattentional blindness – in which the seemingly simple and unambiguous relationship between a physical stimulus in the world and its associated percept in the privacy of the subject's mind is disrupted.[14] In particular a stimulus can be perceptually suppressed for seconds or even minutes at a time: the image is projected into one of the observer's eyes but is invisible, not seen. In this manner the neural mechanisms that respond to the subjective percept rather than the physical stimulus can be isolated, permitting visual consciousness to be tracked in the brain. In a perceptual illusion, the physical stimulus remains fixed while the percept fluctuates. The best known example is the Necker cube whose 12 lines can be perceived in one of two different ways in depth. A perceptual illusion that can be precisely controlled is binocular rivalry. Here, a small image, e.g., a horizontal grating, is presented to the left eye, and another image, e.g., a vertical grating, is shown to the corresponding location in the right eye. In spite of the constant visual stimulus, observers consciously see the horizontal grating alternate every few seconds with the vertical one. The brain does not allow for the simultaneous perception of both images. Logothetis and colleagues[16] recorded from a variety of visual cortical areas in awake macaque monkeys performing a binocular rivalry task. Macaque monkeys can be trained to report whether they see the left or the right image. The distribution of the switching times and the way in which changing the contrast in one eye affects these leaves little doubt that monkeys and humans experience the same basic phenomenon.
In the primary visual cortex (V1) only a small fraction of cells weakly modulated their response as a function of the percept of the monkey, while most cells responded to one or the other retinal stimulus with little regard to what the animal perceived at the time. But in a high-level cortical area such as the inferior temporal cortex along the ventral stream almost all neurons responded only to the perceptually dominant stimulus, so that a "face" cell only fired when the animal indicated that it saw the face and not the pattern presented to the other eye. This implies that the NCC involves neurons active in the inferior temporal cortex: it is likely that specific reciprocal actions of neurons in the inferior temporal and parts of the prefrontal cortex are necessary. A number of fMRI experiments that have exploited binocular rivalry and related illusions to identify the hemodynamic activity underlying visual consciousness in humans demonstrate quite conclusively that activity in the upper stages of the ventral pathway (e.g., the fusiform face area and the parahippocampal place area), as well as in early regions including V1 and the lateral geniculate nucleus (LGN), follows the percept and not the retinal stimulus.[17] Further, a number of fMRI[18][19] and DTI experiments[20] suggest V1 is necessary but not sufficient for visual consciousness.[21] In a related perceptual phenomenon, flash suppression, the percept associated with an image projected into one eye is suppressed by flashing another image into the other eye while the original image remains. Its methodological advantage over binocular rivalry is that the timing of the perceptual transition is determined by an external trigger rather than by an internal event. The majority of cells in the inferior temporal cortex and the superior temporal sulcus of monkeys trained to report their percept during flash suppression follow the animal's percept: when the cell's preferred stimulus is perceived, the cell responds. If the picture is still present on the retina but is perceptually suppressed, the cell falls silent, even though primary visual cortex neurons fire.[22][23] Single-neuron recordings in the medial temporal lobe of epilepsy patients during flash suppression likewise demonstrate abolishment of response when the preferred stimulus is present but perceptually masked.[24] Given the absence of any accepted criterion of the minimal neuronal correlates necessary for consciousness, the distinction between a persistently vegetative patient who shows regular sleep-wake transitions and may be able to move or smile, and a minimally conscious patient who can communicate (on occasion) in a meaningful manner (for instance, by differential eye movements) and who shows some signs of consciousness, is often difficult. In general anesthesia the patient should not experience psychological trauma but the level of arousal should be compatible with clinical exigencies.
Blood-oxygen-level-dependent fMRI has demonstrated normal patterns of brain activity in a patient in a vegetative state following a severe traumatic brain injury when asked to imagine playing tennis or visiting rooms in his/her house.[26] Differential brain imaging of patients with such global disturbances of consciousness (including akinetic mutism) reveals that dysfunction in a widespread cortical network including medial and lateral prefrontal and parietal associative areas is associated with a global loss of awareness.[27] Impaired consciousness in epileptic seizures of the temporal lobe was likewise accompanied by a decrease in cerebral blood flow in frontal and parietal association cortex and an increase in midline structures such as the mediodorsal thalamus.[28] Relatively local bilateral injuries to midline (paramedian) subcortical structures can also cause a complete loss of awareness.[29] These structures therefore enable and control brain arousal (as determined by metabolic or electrical activity) and are necessary neural correlates. One such example is the heterogeneous collection of more than two dozen nuclei on each side of the upper brainstem (pons, midbrain and in the posterior hypothalamus), collectively referred to as the reticular activating system (RAS). Their axons project widely throughout the brain. These nuclei – three-dimensional collections of neurons with their own cyto-architecture and neurochemical identity – release distinct neuromodulators such as acetylcholine, noradrenaline/norepinephrine, serotonin, histamine and orexin/hypocretin to control the excitability of the thalamus and forebrain, mediating alternation between wakefulness and sleep as well as general level of behavioral and brain arousal. After such trauma, however, the excitability of the thalamus and forebrain can eventually recover and consciousness can return.[30] Another enabling factor for consciousness is the five or more intralaminar nuclei (ILN) of the thalamus. These receive input from many brainstem nuclei and project strongly, directly to the basal ganglia and, in a more distributed manner, into layer I of much of the neocortex. Comparatively small (1 cm³ or less) bilateral lesions in the thalamic ILN completely knock out all awareness.[31] Many actions in response to sensory inputs are rapid, transient, stereotyped, and unconscious.[32] They could be thought of as cortical reflexes and are characterized by rapid and somewhat stereotyped responses that can take the form of rather complex automated behavior as seen, e.g., in complex partial epileptic seizures. These automated responses, sometimes called zombie behaviors,[33] can be contrasted with a slower, all-purpose conscious mode that deals more slowly with broader, less stereotyped aspects of the sensory inputs (or a reflection of these, as in imagery) and takes time to decide on appropriate thoughts and responses. Without such a consciousness mode, a vast number of different zombie modes would be required to react to unusual events. A feature that distinguishes humans from most animals is that we are not born with an extensive repertoire of behavioral programs that would enable us to survive on our own ("physiological prematurity"). To compensate for this, we have an unmatched ability to learn, i.e., to consciously acquire such programs by imitation or exploration. Once consciously acquired and sufficiently exercised, these programs can become automated to the extent that their execution happens beyond the realms of our awareness.
Take, as an example, the incredible fine motor skills exerted in playing a Beethoven piano sonata or the sensorimotor coordination required to ride a motorcycle along a curvy mountain road. Such complex behaviors are possible only because a sufficient number of the subprograms involved can be executed with minimal or even suspended conscious control. In fact, the conscious system may actually interfere somewhat with these automated programs.[34] From an evolutionary standpoint it clearly makes sense to have both automated behavioral programs that can be executed rapidly in a stereotyped and automated manner, and a slightly slower system that allows time for thinking and planning more complex behavior. This latter aspect may be one of the principal functions of consciousness. Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes.[35][36] No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide.[37] As a result, an exaptive explanation of consciousness has gained favor with some theorists who posit that consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments such as increases in brain size or cortical rearrangement.[38] Consciousness in this sense has been compared to the blind spot in the retina, which is not an adaptation of the retina but instead just a by-product of the way the retinal axons were wired.[39] Several scholars including Pinker, Chomsky, Edelman, and Luria have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness. It seems possible that visual zombie modes in the cortex mainly use the dorsal stream in the parietal region.[32] However, parietal activity can affect consciousness by producing attentional effects on the ventral stream, at least under some circumstances. The conscious mode for vision depends largely on the early visual areas (beyond V1) and especially on the ventral stream. Seemingly complex visual processing (such as detecting animals in natural, cluttered scenes) can be accomplished by the human cortex within 130–150 ms,[40][41] far too brief for eye movements and conscious perception to occur. Furthermore, reflexes such as the oculovestibular reflex take place at even more rapid time-scales. It is quite plausible that such behaviors are mediated by a purely feed-forward moving wave of spiking activity that passes from the retina through V1, into V4, IT and prefrontal cortex, until it affects motor neurons in the spinal cord that control the finger press (as in a typical laboratory experiment). The hypothesis that the basic processing of information is feedforward is supported most directly by the short times (approx. 100 ms) required for a selective response to appear in IT cells.
Conversely, conscious perception is believed to require more sustained, reverberatory neural activity, most likely via global feedback from frontal regions of neocortex back to sensory cortical areas,[21] that builds up over time until it exceeds a critical threshold. At this point, the sustained neural activity rapidly propagates to parietal, prefrontal and anterior cingulate cortical regions, thalamus, claustrum and related structures that support short-term memory, multi-modality integration, planning, speech, and other processes intimately related to consciousness. Competition prevents more than one or a very small number of percepts from being simultaneously and actively represented. This is the core hypothesis of the global workspace theory of consciousness[42][43] (a numerical sketch of this threshold dynamic appears at the end of this section). In brief, while rapid but transient neural activity in the thalamo-cortical system can mediate complex behavior without conscious sensation, it is surmised that consciousness requires sustained but well-organized neural activity dependent on long-range cortico-cortical feedback. The neurobiologist Christfried Jakob (1866–1956) argued that the only conditions which must have neural correlates are direct sensations and reactions; these are called "intonations".[citation needed] Neurophysiological studies in animals provided some insights on the neural correlates of conscious behavior. Vernon Mountcastle, in the early 1960s, set out to study this set of problems, which he termed "the Mind/Brain problem", by studying the neural basis of perception in the somatic sensory system. His labs at Johns Hopkins were among the first, along with that of Edward V. Evarts at the NIH, to record neural activity from behaving monkeys. Struck by the elegance of S. S. Stevens's approach of magnitude estimation, Mountcastle's group discovered that three different modalities of somatic sensation shared one cognitive attribute: in all cases the firing rate of peripheral neurons was linearly related to the strength of the percept elicited. More recently, Ken H. Britten, William T. Newsome, and C. Daniel Salzman have shown that in area MT of monkeys, neurons respond with variability that suggests they are the basis of decision making about direction of motion. They first showed that neuronal rates are predictive of decisions using signal detection theory, and then that stimulation of these neurons could predictably bias the decision. Such studies were followed by Ranulfo Romo in the somatic sensory system, confirming, using a different percept and brain area, that a small number of neurons in one brain area underlie perceptual decisions. Other lab groups have followed Mountcastle's seminal work relating cognitive variables to neuronal activity with more complex cognitive tasks. Although monkeys cannot talk about their perceptions, behavioral tasks have been created in which animals made nonverbal reports, for example by producing hand movements. Many of these studies employ perceptual illusions as a way to dissociate sensations (i.e., the sensory information that the brain receives) from perceptions (i.e., how the consciousness interprets them). Neuronal patterns that represent perceptions rather than merely sensory input are interpreted as reflecting the neuronal correlate of consciousness. Using such designs, Nikos Logothetis and colleagues discovered perception-reflecting neurons in the temporal lobe. They created an experimental situation in which conflicting images were presented to different eyes (i.e., binocular rivalry).
Under such conditions, human subjects report bistable percepts: they perceive alternately one or the other image. Logothetis and colleagues trained the monkeys to report with their arm movements which image they perceived. Temporal lobe neurons in Logothetis's experiments often reflected what the monkeys perceived. Neurons with such properties were less frequently observed in the primary visual cortex, which corresponds to relatively early stages of visual processing. Another set of experiments using binocular rivalry in humans showed that certain layers of the cortex can be excluded as candidates of the neural correlate of consciousness. Logothetis and colleagues switched the images between eyes during the percept of one of the images. Surprisingly the percept stayed stable. This means that the conscious percept stayed stable while at the same time the primary input to layer 4, the input layer of the visual cortex, changed. Therefore, layer 4 cannot be a part of the neural correlate of consciousness. Mikhail Lebedev and colleagues observed a similar phenomenon in monkey prefrontal cortex. In their experiments monkeys reported the perceived direction of visual stimulus movement (which could be an illusion) by making eye movements. Some prefrontal cortex neurons represented actual and some represented perceived displacements of the stimulus. Observation of perception-related neurons in prefrontal cortex is consistent with the theory of Christof Koch and Francis Crick, who postulated that the neural correlate of consciousness resides in prefrontal cortex. Proponents of distributed neuronal processing may likely dispute the view that consciousness has a precise localization in the brain. Francis Crick wrote a popular book, "The Astonishing Hypothesis", whose thesis is that the neural correlate for consciousness lies in our nerve cells and their associated molecules. Crick and his collaborator Christof Koch[44] have sought to avoid philosophical debates that are associated with the study of consciousness, by emphasizing the search for "correlation" and not "causation".[needs update] There is much room for disagreement about the nature of this correlate (e.g., does it require synchronous spikes of neurons in different regions of the brain? Is the co-activation of frontal or parietal areas necessary?). The philosopher David Chalmers maintains that a neural correlate of consciousness, unlike other correlates such as for memory, will fail to offer a satisfactory explanation of the phenomenon; he calls this the hard problem of consciousness.[45][46]
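The global workspace "ignition" dynamic described earlier, in which reverberatory activity builds up under feedback until it crosses a threshold and is broadcast, can be illustrated with a minimal numerical sketch. All parameter values here are invented for illustration and are not fitted to any data:

```python
def ignition(input_drive, feedback_gain=0.3, decay=0.4,
             threshold=1.0, steps=50):
    """Return the step at which activity crosses the workspace threshold,
    or None if it never does."""
    activity = 0.0
    for t in range(steps):
        # Feedforward drive plus reverberatory feedback, minus decay.
        activity += input_drive + feedback_gain * activity - decay * activity
        if activity >= threshold:
            return t  # global broadcast ("ignition") begins here
    return None

print(ignition(0.05))  # -> None: a weak stimulus saturates below threshold
print(ignition(0.20))  # -> 6: a stronger stimulus ignites after a few steps
```

Because decay exceeds feedback gain in this toy, weak inputs settle at a subthreshold fixed point and never ignite, while stronger inputs cross the threshold after a delay: a crude analogue of the all-or-none, threshold-like access to the workspace that the theory posits.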
https://en.wikipedia.org/wiki/Neural_correlates_of_consciousness
Pandemonium architecture is a theory in cognitive science that describes how visual images are processed by the brain. It has applications in artificial intelligence and pattern recognition. The theory was developed by the artificial intelligence pioneer Oliver Selfridge in 1959. It describes the process of object recognition as the exchange of signals within a hierarchical system of detection and association, the elements of which Selfridge metaphorically termed "demons". This model is now recognized as the basis of visual perception in cognitive science.

Pandemonium architecture arose in response to the inability of template matching theories to offer a biologically plausible explanation of the image constancy phenomenon. Contemporary researchers have praised this architecture for its elegance and creativity: the idea of having multiple independent systems (e.g., feature detectors) working in parallel to address the image constancy phenomenon of pattern recognition is powerful yet simple. The basic idea of the pandemonium architecture is that a pattern is first perceived in its parts before the "whole".[1]

Pandemonium architecture was one of the first computational models in pattern recognition. Although not perfect, the pandemonium architecture influenced the development of modern connectionist, artificial intelligence, and word recognition models.[2]

Most research in perception has been focused on the visual system, investigating the mechanisms of how we see and understand objects. A critical function of our visual system is its ability to recognize patterns, but the mechanism by which this is achieved is unclear.[3]

The earliest theory that attempted to explain how we recognize patterns is the template matching model. According to this model, we compare all external stimuli against an internal mental representation. If there is "sufficient" overlap between the perceived stimulus and the internal representation, we will "recognize" the stimulus. Although some machines follow a template matching model (e.g., bank machines verifying signatures and account numbers), the theory is critically flawed in explaining the phenomenon of image constancy: we can easily recognize a stimulus regardless of the changes in its form of presentation (e.g., a T rendered in different typefaces is still easily recognized as the letter T). It is highly unlikely that we have a stored template for all of the variations of every single pattern.[4]

As a result of the biological plausibility criticism of the template matching model, feature detection models began to rise. In a feature detection model, the image is first perceived in its basic individual elements before it is recognized as a whole object. For example, when we are presented with the letter A, we would first see a short horizontal line and two slanted long diagonal lines. Then we would combine the features to complete the perception of A. Each unique pattern consists of a different combination of features, which means that patterns formed from the same features generate the same recognition. That is, regardless of how we rotate the letter A, it is still perceived as the letter A. It is easy for this sort of architecture to account for the image constancy phenomenon, because one only needs to "match" at the basic featural level, which is presumed to be limited and finite, and thus biologically plausible. The best known feature detection model is called the pandemonium architecture.[4]

The pandemonium architecture was originally developed by Oliver Selfridge in the late 1950s.
The architecture is composed of different groups of "demons" working independently to process the visual stimulus. Each group of demons is assigned to a specific stage in recognition, and within each group, the demons work in parallel. There are four major groups of demons in the original architecture.[3]

The concept of feature demons, that there are specific neurons dedicated to performing specialized processing, is supported by research in neuroscience. Hubel and Wiesel found that there were specific cells in a cat's brain that responded to specific lengths and orientations of a line. Similar findings were discovered in frogs, octopuses and a variety of other animals. Octopuses were discovered to be sensitive only to the verticality of lines, whereas frogs demonstrated a wider range of sensitivity. These animal experiments demonstrate that feature detectors seem to be a very primitive development; that is, they did not result from the higher cognitive development of humans. Not surprisingly, there is also evidence that the human brain possesses these elementary feature detectors as well.[5][6][7]

Moreover, this architecture is capable of learning, similar to a back-propagation-style neural network. The weights between the cognitive and feature demons can be adjusted in proportion to the difference between the correct pattern and the activation from the cognitive demons. To continue with our previous example, when we first learn the letter R, we learn that it is composed of a curved line, a long straight line, and a short angled line. Thus when we perceive those features, we perceive R. However, the letter P consists of very similar features, so during the beginning stages of learning, this architecture is likely to mistakenly identify R as P. But through repeated exposure in which R's features are confirmed as R, the weights from R's features to P are adjusted so that the P response becomes inhibited (e.g., learning to inhibit the P response when a short angled line is detected). In principle, a pandemonium architecture can recognize any pattern (a minimal code sketch of this weight-adjustment scheme appears at the end of this article).[8]

As mentioned earlier, this architecture makes error predictions based on the amount of overlapping features; for example, the most likely error for R should be P. Thus, in order to show that this architecture represents the human pattern recognition system, we must put these predictions to the test. Researchers have constructed scenarios where various letters are presented in situations that make them difficult to identify; the types of errors made were then recorded and used to generate confusion matrices, in which all of the errors for each letter are tallied. Generally, the results from these experiments matched the error predictions from the pandemonium architecture. Also as a result of these experiments, some researchers have proposed models that attempted to list all of the basic features in the Roman alphabet.[9][10][11][12]

A major criticism of the pandemonium architecture is that it adopts completely bottom-up processing: recognition is entirely driven by the physical characteristics of the targeted stimulus. This means that it is unable to account for any top-down processing effects, such as context effects (e.g., pareidolia), where contextual cues can facilitate processing (e.g., the word superiority effect: it is relatively easier to identify a letter when it is part of a word than in isolation).
However, this is not a fatal criticism of the overall architecture, because it is relatively easy to add a group of contextual demons to work along with the cognitive demons to account for these context effects.[13]

Although the pandemonium architecture is built on the premise that it can account for the image constancy phenomenon, some researchers have argued otherwise, and pointed out that the pandemonium architecture might share the same flaws as the template matching models. For example, the letter H is composed of 2 long vertical lines and a short horizontal line; but if we rotate the H 90 degrees in either direction, it is now composed of 2 long horizontal lines and a short vertical line. In order to recognize the rotated H as H, we would need a rotated-H cognitive demon. Thus we might end up with a system that requires a large number of cognitive demons in order to produce accurate recognition, which would lead to the same biological plausibility criticism of the template matching models. However, it is rather difficult to judge the validity of this criticism, because the pandemonium architecture does not specify how and what features are extracted from incoming sensory information; it simply outlines the possible stages of pattern recognition. Of course, that raises its own questions: it is almost impossible to criticize such a model when it does not include specific parameters. Also, the theory appears to be rather incomplete without defining how and what features are extracted, which proves to be especially problematic with complex patterns (e.g., extracting the weight and features of a dog).[3][14]

Some researchers have also pointed out that the evidence supporting the pandemonium architecture has been very narrow in its methodology. The majority of the research that supports this architecture has referred to its ability to recognize simple schematic drawings selected from a small finite set (e.g., letters in the Roman alphabet). Evidence from these types of experiments can lead to overgeneralized and misleading conclusions, because the recognition process of complex, three-dimensional patterns could be very different from that of simple schematics. Furthermore, some have criticized the methodology used in generating the confusion matrix, because it confounds perceptual confusion (error in identification caused by overlapping features between the error and the correct answer) with post-perceptual guessing (people randomly guessing because they cannot be sure what they saw). However, these criticisms were somewhat addressed when similar results were replicated with other paradigms (e.g., go/no-go and same-different tasks), supporting the claim that humans do have elementary feature detectors. These new paradigms relied on reaction time as the dependent variable, which also avoided the problem of empty cells that is inherent to the confusion matrix (statistical analyses are difficult to conduct and interpret when the data have empty cells).[7]

Additionally, some researchers have pointed out that feature accumulation theories like the pandemonium architecture have the processing stages of pattern recognition almost backwards.
This criticism was mainly used by advocates of the global-to-local theory, who argued and provided evidence that perception begins with a blurry view of the whole that refines over time, implying that feature extraction does not happen in the early stages of recognition.[15] However, there is nothing to prevent a demon from recognizing a global pattern in parallel with other demons recognizing local patterns within the global pattern.

The pandemonium architecture has been applied to solve several real-world problems, such as translating hand-sent Morse code and identifying hand-printed letters. The overall accuracy of pandemonium-based models is impressive, even when the system was given only a short learning period. For example, Doyle constructed a pandemonium-based system with over 30 complex feature analyzers. He then fed his system several hundred letters for learning. During this phase, the system analyzed the input letter and generated its own output (what the system identified the letter as). The output from the system was compared against the correct identification, which sent an error signal back to the system to adjust the weights between the feature analyzers accordingly. In the testing phase, unfamiliar letters were presented (in a different style and size from those presented in the learning phase), and the system was able to achieve nearly 90% accuracy. Because of its impressive capability to recognize words, all modern theories on how humans read and recognize words follow this hierarchical structure: word recognition begins with feature extraction of the letters, which then activates the letter detectors[16] (e.g., SOLAR,[17] SERIOL,[18] IA,[19] DRC[20]).

Based on the original pandemonium architecture, John Jackson has extended the theory to explain phenomena beyond perception. Jackson offered the analogy of an arena to account for "consciousness". His arena consisted of a stand, a playing field, and a sub-arena. The arena was populated by a multitude of demons. The demons designated to the playing field were the active demons, as they represent the active elements of human consciousness. The demons in the stands watch those in the playing field until something excites them; each demon is excited by different things. The more excited the demons get, the louder they yell. If a demon yells past a set threshold, it gets to join the other demons in the playing field and perform its function, which may then excite other demons, and this cycle continues. The sub-arena in the analogy functions as the learning and feedback mechanism of the system. The learning system here is similar to that of other neural-style networks, operating through modification of the connection strengths between the demons; in other words, how the demons respond to each other's yelling. This multiple-agent approach to human information processing became an underlying assumption of many modern artificial intelligence systems.[21][22]

Although the pandemonium architecture arose as a response to a major criticism of the template matching theories, the two are actually rather similar in some sense: there is a process where a specific set of features for items is matched against some sort of mental representation. The critical difference between the two is that the image is directly compared against an internal representation in the template matching theories, whereas with the pandemonium architecture, the image is first diffused and processed at the featural level.
This grants pandemonium architectures tremendous power, because they are capable of recognizing a stimulus despite changes in its size, style and other transformations, without presuming an unlimited pattern memory. It is also unlikely that the template matching theories will function properly when faced with realistic visual inputs, where objects are presented in three dimensions and often occluded by other objects (e.g., half of a book is covered by a piece of paper, but we can still recognize it as a book with relative ease). Nonetheless, some researchers have conducted experiments comparing the two theories. Not surprisingly, the results often favored a hierarchical feature-building model like the pandemonium architecture.[23][24][25]

The Hebbian model resembles feature-oriented theories like the pandemonium architecture in many aspects. The first level of processing in the Hebbian model is the cell assemblies, which have very similar functions to the feature demons. However, cell assemblies are more limited than the feature demons, because they can only extract lines, angles and contours. The cell assemblies are combined to form phase sequences, whose function is very similar to that of the cognitive demons. In a sense, many consider the Hebbian model to be a crossover between the template and feature matching theories, as the features extracted by the Hebbian model can be considered basic templates.[8]
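As a rough illustration of the architecture, the following sketch implements feature demons, cognitive demons whose "shouting" is a weighted sum of feature activity, and the weight-adjustment learning described earlier (strengthening the correct letter demon and inhibiting the loudest competitor). The feature inventory and letter definitions are illustrative assumptions, not Selfridge's actual feature set.

```python
import numpy as np

# Illustrative feature inventory and letter definitions (assumptions).
FEATURES = ["vertical", "horizontal", "curve", "short_angled", "long_diagonal"]
LETTERS = {"R": {"vertical", "curve", "short_angled"},
           "P": {"vertical", "curve"},
           "A": {"horizontal", "long_diagonal"}}

def feature_demons(stimulus):
    """Each feature demon 'yells' (1.0) if its feature is present."""
    return np.array([1.0 if f in stimulus else 0.0 for f in FEATURES])

class CognitiveDemon:
    def __init__(self, letter):
        self.letter = letter
        self.w = np.zeros(len(FEATURES))     # weights from the feature demons

    def shout(self, evidence):
        return float(self.w @ evidence)      # loudness of this demon's yell

demons = [CognitiveDemon(l) for l in LETTERS]

# Learning: reinforce the correct demon's weights and inhibit the loudest
# wrong demon (e.g., R's short angled line comes to inhibit the P response).
for _ in range(20):
    for letter, stim in LETTERS.items():
        ev = feature_demons(stim)
        loudest = demons[int(np.argmax([d.shout(ev) for d in demons]))]
        target = next(d for d in demons if d.letter == letter)
        target.w += 0.1 * ev
        if loudest is not target:
            loudest.w -= 0.1 * ev

# Decision demon: pick the loudest cognitive demon for a new stimulus.
ev = feature_demons({"vertical", "curve", "short_angled"})   # an R
print(max(demons, key=lambda d: d.shout(ev)).letter)         # should print R
```

Even this toy version reproduces the error prediction discussed above: early in training, the R stimulus is often misidentified as P until the inhibitory weights develop.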
https://en.wikipedia.org/wiki/Pandemonium_architecture
Unified Theories of Cognition is a 1990 book by Allen Newell.[1] Newell argues for the need for a set of general assumptions for cognitive models that account for all of cognition: a unified theory of cognition, or cognitive architecture.

The research Newell started on unified theories of cognition represents a crucial point of divergence from the vision of his long-term collaborator and fellow AI pioneer Herbert Simon concerning the future of artificial intelligence research. Antonio Lieto recently drew attention to this discrepancy,[2] pointing out that Herbert Simon decided to focus on the construction of single simulative programs (or microtheories/"middle-range" theories), which he considered a sufficient means of enabling the generalisation of "unifying" theories of cognition (i.e., according to Simon, the "unification" was assumed to be derivable from a body of qualitative generalizations coming from the study of individual simulative programs). Newell, on the other hand, did not consider the construction of single simulative microtheories a sufficient means of enabling the generalisation of "unifying" theories of cognition and, in fact, started the enterprise of studying and developing integrated and multi-tasking intelligence via cognitive architectures, an enterprise that led to the development of the Soar cognitive architecture.

Newell argues that the mind functions as a single system. He also claims the established cognitive models are vastly underdetermined by experimental data. After arguing in favor of the development of unified theories of cognition, Newell puts forward a list of constraints on any unified theory, specifying the range of capabilities a theory of mind must explain. Newell's secondary task is to put forward the cognitive architecture Soar as an implementation of a UTC that meets these constraints. Other efforts at unified theories of cognition cited in the book include ACT-R and the human processor model.
https://en.wikipedia.org/wiki/Unified_theory_of_cognition
The Never-Ending Language Learning system (NELL) is a semantic machine learning system that, as of 2010, was being developed by a research team at Carnegie Mellon University, supported by grants from DARPA, Google, NSF, and CNPq, with portions of the system running on a supercomputing cluster provided by Yahoo!.[1]

NELL was programmed by its developers to be able to identify a basic set of fundamental semantic relationships between a few hundred predefined categories of data, such as cities, companies, emotions and sports teams. Since the beginning of 2010, the Carnegie Mellon research team has been running NELL around the clock, sifting through hundreds of millions of web pages looking for connections between the information it already knows and what it finds through its search process – to make new connections in a manner that is intended to mimic the way humans learn new information.[2] For example, in encountering the word pair "Pikes Peak", NELL would notice that both words are capitalized and deduce from the second word that it is the name of a mountain, and then build on the relationships of the words surrounding those two words to deduce other connections.[1]

The goal of NELL and other semantic learning systems, such as IBM's Watson system, is to be able to develop means of answering questions posed by users in natural language with no human intervention in the process.[3] Oren Etzioni of the University of Washington lauded the system's "continuous learning, as if NELL is exercising curiosity on its own, with little human help".[1]

By October 2010, NELL had doubled the number of relationships available in its knowledge base and had learned 440,000 new facts, with an accuracy of 87%.[4][1] Team leader Tom M. Mitchell, chairman of the machine learning department at Carnegie Mellon, described how NELL "self-corrects when it has more information, as it learns more", though it does sometimes arrive at incorrect conclusions. Accumulated errors, such as the deduction that Internet cookies were a kind of baked good, led NELL to deduce from the phrases "I deleted my Internet cookies" and "I deleted my files" that "computer files" also belonged in the baked goods category.[5] Clear errors like these are corrected every few weeks by the members of the research team, and the system is then allowed to continue its learning process.[1] By 2018, NELL had "acquired a knowledge base with 120mn diverse, confidence-weighted beliefs (e.g., servedWith(tea, biscuits)), while learning thousands of interrelated functions that continually improve its reading competence over time."[6]

As of September 2023, the project's most recently gathered facts dated from February 2019 (according to its Twitter feed)[7] or September 2018 (according to its home page).[8]

In his 2019 book "Human Compatible", Stuart Russell commented that 'Unfortunately NELL has confidence in only 3 percent of its beliefs and relies on human experts to clean out false or meaningless beliefs on a regular basis—such as its beliefs that "Nepal is a country also known as United States" and "value is an agricultural product that is usually cut into basis."'[9] A 2023 paper commented that "While the never-ending part seems like the right approach, NELL still had the drawback that its focus remained much too grounded on object-language descriptions, and relied on web pages as its only source, which significantly influenced the type of grammar, symbolism, slang, etc. analysed."[10]
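NELL's actual coupled semi-supervised learners are far more elaborate, but the core bootstrapping loop (seed facts suggest textual extraction patterns, which in turn harvest new candidate facts) can be sketched in a few lines. The toy corpus, seed set, and pattern form below are all illustrative assumptions, not NELL's real data or pattern language.

```python
import re

# Toy corpus and seed facts; NELL itself reads hundreds of millions of pages.
corpus = [
    "Pikes Peak is a mountain in Colorado.",
    "Denali is a mountain in Alaska.",
    "Boston is a city in Massachusetts.",
    "Everest is a mountain in Nepal.",
]
seeds = {"mountain": {"Pikes Peak", "Denali"}}

def bootstrap(corpus, seeds, rounds=2):
    known = {cat: set(insts) for cat, insts in seeds.items()}
    for _ in range(rounds):
        # 1) From known instances, learn textual contexts ("X is a mountain").
        patterns = set()
        for cat, insts in known.items():
            for sent in corpus:
                for inst in insts:
                    if sent.startswith(inst):
                        patterns.add((cat, sent[len(inst):].split(" in ")[0]))
        # 2) Apply the learned contexts to harvest new candidate instances.
        for cat, ctx in patterns:
            for sent in corpus:
                m = re.match(r"([A-Z][\w ]*?)" + re.escape(ctx), sent)
                if m:
                    known[cat].add(m.group(1).strip())
    return known

print(bootstrap(corpus, seeds))   # Everest joins the 'mountain' category
```

The same loop also shows how NELL's characteristic errors arise: a spurious seed or an over-general pattern (like the "baked goods" example above) propagates through subsequent rounds unless a human prunes it.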
https://en.wikipedia.org/wiki/Never-Ending_Language_Learning
Open Mind Common Sense (OMCS) is an artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web. It was active from 1999 to 2016. Over that time, it accumulated more than a million English facts from over 15,000 contributors, in addition to knowledge bases in other languages. Much of OMCS's software is built on three interconnected representations: the natural language corpus that people interact with directly, a semantic network built from this corpus called ConceptNet, and a matrix-based representation of ConceptNet called AnalogySpace that can infer new knowledge using dimensionality reduction.[1] The knowledge collected by Open Mind Common Sense has enabled research projects at MIT and elsewhere.

The project was the brainchild of Marvin Minsky, Push Singh, Catherine Havasi, and others. Development work began in September 1999, and the project opened to the Internet a year later. Havasi described it in her dissertation as "an attempt to ... harness some of the distributed human computing power of the Internet, an idea which was then only in its early stages."[2] The original OMCS was influenced by the website Everything2 and its predecessor, and presented a minimalist interface inspired by Google. Push Singh was to become a professor at the MIT Media Lab and lead the Common Sense Computing group in 2007, but committed suicide on February 28, 2006.[3] The project was subsequently run by the Digital Intuition Group at the MIT Media Lab under Catherine Havasi.

There are many different types of knowledge in OMCS. Some statements convey relationships between objects or events, expressed as simple phrases of natural language: some examples include "A coat is used for keeping warm", "The sun is very hot", and "The last thing you do when you cook dinner is wash your dishes". The database also contains information on the emotional content of situations, in such statements as "Spending time with friends causes happiness" and "Getting into a car wreck makes one angry". OMCS contains information on people's desires and goals, both large and small, such as "People want to be respected" and "People want good coffee".[1]

Originally, these statements could be entered into the Web site as unconstrained sentences of text, which had to be parsed later. The current version of the Web site collects knowledge only through more structured fill-in-the-blank templates. OMCS also makes use of data collected by the Game With a Purpose "Verbosity".[4]

In its native form, the OMCS database is simply a collection of these short sentences that convey some common knowledge. In order to use this knowledge computationally, it has to be transformed into a more structured representation. ConceptNet is a semantic network based on the information in the OMCS database. ConceptNet is expressed as a directed graph whose nodes are concepts, and whose edges are assertions of common sense about these concepts. Concepts represent sets of closely related natural language phrases, which could be noun phrases, verb phrases, adjective phrases, or clauses.[5]

ConceptNet is created from the natural-language assertions in OMCS by matching them against patterns using a shallow parser. Assertions are expressed as relations between two concepts, selected from a limited set of possible relations.
The various relations represent common sentence patterns found in the OMCS corpus; in particular, every "fill-in-the-blanks" template used on the knowledge-collection Web site is associated with a particular relation.[5]

The data structures that make up ConceptNet were significantly reorganized in 2007, and published as ConceptNet 3.[5] The Software Agents group currently distributes a database and API for the new version 4.0.[6]

In 2010, OMCS co-founder and director Catherine Havasi, with Robyn Speer, Dennis Clark and Jason Alonso, created Luminoso, a text analytics software company that builds on ConceptNet.[7][8][9][10] It uses ConceptNet as its primary lexical resource in order to help businesses make sense of and derive insight from vast amounts of qualitative data, including surveys, product reviews and social media.[7][11][12]

The information in ConceptNet can be used as a basis for machine learning algorithms. One representation, called AnalogySpace, uses singular value decomposition to generalize and represent patterns in the knowledge in ConceptNet, in a way that can be used in AI applications. Its creators distribute a Python machine learning toolkit called Divisi[13] for performing machine learning based on text corpora, structured knowledge bases such as ConceptNet, and combinations of the two.

Other similar projects include Never-Ending Language Learning, Mindpixel (discontinued), Cyc, Learner, SenticNet, Freebase, YAGO, DBpedia, and Open Mind 1001 Questions, which have explored alternative approaches to collecting knowledge and providing incentive for participation. The Open Mind Common Sense project differs from Cyc because it has focused on representing the common sense knowledge it collected as English sentences, rather than using a formal logical structure. ConceptNet is described by one of its creators, Hugo Liu, as being structured more like WordNet than Cyc, due to its "emphasis on informal conceptual-connectedness over formal linguistic-rigor".[14]
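As a rough sketch of the two representations described above (ConceptNet's directed graph of assertions, and AnalogySpace's SVD-smoothed concept-by-feature matrix), consider the toy knowledge base below. The assertions and relation names are illustrative assumptions; the real AnalogySpace is built with the Divisi toolkit mentioned above.

```python
import numpy as np

# Tiny set of ConceptNet-style assertions: (concept, relation, concept).
assertions = [
    ("coat", "UsedFor", "keeping warm"), ("blanket", "UsedFor", "keeping warm"),
    ("hat", "UsedFor", "keeping warm"), ("coat", "IsA", "clothing"),
    ("hat", "IsA", "clothing"), ("blanket", "IsA", "bedding"),
]

# ConceptNet-style view: a directed graph, node -> [(relation, node), ...].
graph = {}
for a, rel, b in assertions:
    graph.setdefault(a, []).append((rel, b))
print(graph["coat"])   # [('UsedFor', 'keeping warm'), ('IsA', 'clothing')]

# AnalogySpace-style view: rows are concepts, columns are (relation, concept)
# "features"; a 1 marks an assertion contributed by a user.
concepts = sorted({a for a, _, _ in assertions})
features = sorted({(r, b) for _, r, b in assertions})
M = np.zeros((len(concepts), len(features)))
for a, rel, b in assertions:
    M[concepts.index(a), features.index((rel, b))] = 1.0

# Truncated SVD smooths the matrix, so concepts that share features
# (coat, hat and blanket all keep you warm) lend each other plausibility.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2                                  # keep the top-k latent dimensions
M_hat = (U[:, :k] * s[:k]) @ Vt[:k, :]

i = concepts.index("blanket")
j = features.index(("IsA", "clothing"))
print(f"inferred score for 'blanket IsA clothing': {M_hat[i, j]:.2f}")
```

The nonzero score for an assertion no contributor ever entered is the point of the dimensionality reduction: it generalizes from the patterns in the collected knowledge rather than merely storing it.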
https://en.wikipedia.org/wiki/Open_Mind_Common_Sense
Dynamic functional connectivity (DFC) refers to the observed phenomenon that functional connectivity changes over short timescales. Dynamic functional connectivity is a recent expansion of traditional functional connectivity analysis, which typically assumes that functional networks are static in time. DFC is related to a variety of different neurological disorders, and has been suggested to be a more accurate representation of functional brain networks. The primary tool for analyzing DFC is fMRI, but DFC has also been observed with several other mediums.

DFC is a recent development within the field of functional neuroimaging whose discovery was motivated by the observation of temporal variability in the rising field of steady-state connectivity research. Functional connectivity refers to the functionally integrated relationship between spatially separated brain regions. Unlike structural connectivity, which looks for physical connections in the brain, functional connectivity is related to similar patterns of activation in different brain regions, regardless of the apparent physical connectedness of the regions.[1] This type of connectivity was discovered in the mid-1990s and has been seen primarily using fMRI and positron emission tomography.[2] Functional connectivity is usually measured during resting-state fMRI and is typically analyzed in terms of correlation, coherence, and spatial grouping based on temporal similarities.[3] These methods have been used to show that functional connectivity is related to behavior in a variety of different tasks, and that it has a neural basis. These methods assume that the functional connections in the brain remain constant over the short duration of a task or data-collection period.

Studies that showed brain-state-dependent changes in functional connectivity were the first indicators that temporal variation in functional connectivity may be significant. Several studies in the mid-2000s examined changes in FC related to a variety of different causes, such as mental tasks,[4] sleep,[5] and learning.[6] These changes often occur within the same individual and are clearly relevant to behavior. DFC has now been investigated in a variety of different contexts with many analysis tools. It has been shown to be related to both behavior and neural activity. Some researchers believe that it may be heavily related to high-level thought or consciousness.[3]

Because DFC is such a new field, much of the research related to it is conducted to validate the relevance of these dynamic changes rather than explore their implications; however, many critical findings have been made that help the scientific community better understand the brain. Analysis of dynamic functional connectivity has shown that, far from being completely static, the functional networks of the brain fluctuate on the scale of seconds to minutes. These changes are generally seen as movements from one short-term state to another, rather than continuous shifts.[3] Many studies have shown reproducible patterns of network activity that move throughout the brain. These patterns have been seen in both animals and humans, and are present at only certain points during a scanner session.[7] In addition to showing transient brain states, DFC analysis has shown a distinct hierarchical organization of the networks of the brain. Connectivity between bilaterally symmetric regions is the most stable form of connectivity in the brain, followed by other regions with direct anatomical connections.
Steady-state functional connectivity networks exist and have physiological relevance, but have less temporal stability than the anatomical networks. Finally, some functional networks are fleeting enough to be seen only with DFC analysis. These networks also possess physiological relevance but are much less temporally stable than the other networks in the brain.[8]

Sliding window analysis is the most common method used in the analysis of functional connectivity; it was first introduced by Sakoglu and Calhoun in 2009, and applied to schizophrenia.[9][10][11][12] Sliding window analysis is performed by conducting analysis on a set number of scans in an fMRI session. The number of scans is the length of the sliding window. The defined window is then moved a certain number of scans forward in time and additional analysis is performed. The movement of the window is usually referenced in terms of the degree of overlap between adjacent windows. One of the principal benefits of sliding window analysis is that almost any steady-state analysis can also be performed using a sliding window if the window length is sufficiently large. Sliding window analysis also has the benefit of being easy to understand and in some ways easier to interpret.[3] As the most common method of analysis, sliding window analysis has been used in many different ways to investigate a variety of different characteristics and implications of DFC. (A minimal sliding-window example appears at the end of this article.) In order to be accurately interpreted, data from sliding window analysis generally must be compared between two different groups. Researchers have used this type of analysis to show different DFC characteristics in diseased and healthy patients, in high and low performers on cognitive tasks, and between large-scale brain states.

One of the first methods ever used to analyze DFC was pattern analysis of fMRI images, showing that there are patterns of activation in spatially separated brain regions that tend to have synchronous activity. It has become clear that there is a spatial and temporal periodicity in the brain that probably reflects some of its constant processes. Repeating patterns of network information have been suggested to account for 25–50% of the variance in fMRI BOLD data.[7][13] These patterns of activity have primarily been seen in rats as a propagating wave of synchronized activity along the cortex. These waves have been shown to be related to underlying neural activity, and to be present in humans as well as rats.[7]

Departing from the traditional approaches, an efficient method was recently introduced to analyze rapidly changing functional activation patterns, which transforms the fMRI BOLD data into a point process.[14][15] This is achieved by selecting, for each voxel, the points of inflection of the BOLD signal (i.e., the peaks). These few points contain a great portion of the information pertaining to functional connectivity, because it has been demonstrated that, despite the tremendous reduction in data size (> 95%), the method compares very well with inferences of functional connectivity[16][17] obtained with standard methods which use the full signal. The large information content of these few points is consistent with the results of Petridou et al.,[18] who demonstrated the contribution of these "spontaneous events" to the correlation strength and power spectra of the slow spontaneous fluctuations by deconvolving the task hemodynamic response function from the rest data.
Subsequently, similar principles were successfully applied under the name of co-activation patterns (CAP).[19][20][21]

Time-frequency analysis has been proposed as an analysis method that is capable of overcoming many of the challenges associated with sliding windows. Unlike sliding window analysis, time-frequency analysis allows the researcher to investigate both frequency and amplitude information simultaneously. The wavelet transform has been used to conduct DFC analysis that validated the existence of DFC by showing its significant changes in time. This same method has recently been used to investigate some of the dynamic characteristics of accepted networks. For example, time-frequency analysis has shown that the anticorrelation between the default mode network and the task-positive network is not constant in time but rather is a temporary state.[22] Independent component analysis has become one of the most common methods of network generation in steady-state functional connectivity. ICA divides the fMRI signal into several spatial components that have similar temporal patterns. More recently, it has been used to divide fMRI data into different temporal components. This has been termed temporal ICA, and it has been used to plot network behavior that accounts for 25% of variability in the correlation of anatomical nodes in fMRI.[23]

Several researchers have argued that DFC may be a simple reflection of analysis, scanner, or physiological noise. Noise in fMRI can arise from a variety of different factors, including heart beat, changes in the blood–brain barrier, characteristics of the acquiring scanner, or unintended effects of analysis. Some researchers have proposed that the variability in functional connectivity in fMRI studies is consistent with the variability that one would expect from simply analyzing random data. This complaint that DFC may reflect only noise has recently been lessened by the observation of an electrical basis for fMRI DFC data and the behavioral relevance of DFC characteristics.[3]

In addition to complaints that DFC may be a product of scanner noise, observed DFC could be criticized based on the indirect nature of the fMRI used to observe it. fMRI data are collected by quickly acquiring a sequence of MRI images in time using echo planar imaging. The contrast in these images is heavily influenced by the ratio of oxygenated to deoxygenated blood. Since active neurons require more energy than resting neurons, changes in this contrast are traditionally interpreted as an indirect measure of neural activity. Because of its indirect nature, fMRI data in DFC studies could be criticized as potentially being a reflection of non-neural information. This concern has been alleviated recently by the observed correlation between fMRI DFC and simultaneously acquired electrophysiology data.[24] Battaglia and colleagues have tried to address these controversies by linking dynamic functional connectivity to causality, or effective connectivity; they claim that dynamic effective connectivity can emerge from transitions in the collective organization of coherent neural activity.[25]

fMRI is the primary means of investigating DFC. This presents unique challenges, because fMRI has fairly low temporal resolution, typically 0.5 Hz, and is only an indirect measure of neural activity. The indirect nature of fMRI analysis suggests that validation is needed to show that findings from fMRI are actually relevant and reflective of neural activity.
Correlation between DFC and electrophysiology has led some scientists to suggest that DFC could reflect hemodynamic results of the dynamic network behavior that has been seen in single-cell analysis of neuron populations. Although the hemodynamic response is too slow to reflect a one-to-one correspondence with neural network dynamics, it is plausible that DFC is a reflection of the power of some frequencies of electrophysiology data.[3]

Electroencephalography (EEG) has also been used in humans to both validate and interpret observations made in DFC. EEG has poor spatial resolution because it is only able to acquire data on the surface of the scalp, but it is reflective of broad electrical activity from many neurons. EEG has been used simultaneously with fMRI to account for some of the inter-scan variance in FC. EEG has also been used to show that changes in FC are related to broad brain states observed in EEG.[26][27][28][29]

Magnetoencephalography (MEG) can be used to measure the magnetic fields produced by electrical activity in the brain. MEG has high temporal resolution and generally higher spatial resolution than EEG. Resting-state studies with MEG are still limited by spatial resolution, but the modality has been used to show that resting-state networks move through periods of low and high levels of correlation. This observation is consistent with the results seen in other DFC studies such as DFC activation pattern analysis.[3]

Single-unit recordings were used to explore the extent, strength and plasticity of functional connectivity between individual cortical neurons in cats and monkeys. Such studies revealed correlated activity at various time scales. At the fastest time scale, that of 1–20 ms, correlation coefficients were typically < 0.05.[30][31] These functional connections were found to be plastic: changing the correlation for a conditioning period of Ts (typically a few minutes) by means of spike-triggered sensory stimulation induced short-term changes of the connections, typically lasting less than Ts. The pre-post conditioning strengthening of a functional connection was typically equal to the square root of its pre-during conditioning strengthening.[32]

Dynamic functional connectivity studied using fMRI may be related to a phenomenon previously discovered in macaque prefrontal cortex termed Dynamic Network Connectivity, whereby arousal mechanisms rapidly alter the strength of glutamate synaptic connections onto dendritic spines by opening or closing potassium channels on spines, thus weakening or strengthening connectivity, respectively.[33][34] For example, dopamine D1 receptor and/or noradrenergic beta-1 receptor stimulation on spines can increase cAMP-PKA-calcium signaling to open HCN, KCNQ2, and/or SK channels and rapidly weaken a connection, e.g. as occurs during stress.[35]

DFC has been shown to be significantly related to human performance, including vigilance and aspects of attention. It has been proposed and supported that the network behavior immediately prior to a task onset is a strong predictor of performance on that task.
Traditionally, fMRI studies have focused on the magnitude of activation in brain regions as a predictor of performance, but recent research has shown that the correlation between networks, as measured with sliding window analysis, is an even stronger predictor of performance.[24] Individual differences in functional connectivity variability (FCV) across sliding windows within fMRI scans have been shown to correlate with the tendency to attend to pain.[36] The degree to which a subject's mind is wandering away from a sensory stimulus has also been related to FCV.[37]

One of the principal motivations of DFC analysis is to better understand, detect, and treat neurological diseases. Static functional connectivity has been shown to be significantly related to a variety of diseases such as depression, schizophrenia, and Alzheimer's disease. Because of the newness of the field, DFC has only recently been used to investigate disease states, but since 2012 each of these three diseases has been shown to be correlated with dynamic temporal characteristics of functional connectivity. Most of these differences are related to the amount of time spent in different transient states. Patients with schizophrenia have less frequent state changes than healthy patients, and this result has led to the suggestion that the disease is related to patients being stuck in certain brain states where the brain is unable to respond quickly to different cues.[38] Also, a study of the visual sensory network showed that schizophrenia subjects spent more time than healthy subjects in a state in which the connectivity between the middle temporal gyrus and other regions of the visual sensory network is highly negative.[39] Studies of Alzheimer's disease have shown that patients with this ailment have altered network connectivity as well as altered time spent in the networks that are present.[40] The observed correlation between DFC and disease does not imply that the changes in DFC are the cause of any of these diseases, but information from DFC analysis may be used to better understand the effects of these diseases and to more quickly and accurately diagnose them.
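The sliding-window example promised earlier: a minimal sketch of sliding-window correlation on synthetic "ROI" time series, in which two regions are correlated during the first half of a scan and uncorrelated during the second. The window length, step size, and synthetic data are illustrative assumptions; real analyses apply this to preprocessed BOLD signals.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic BOLD-like time series for two regions of interest (ROIs):
# correlated in the first half of the scan, independent in the second.
n = 400                                   # time points (e.g., TRs)
shared = rng.standard_normal(n)
roi_a = shared + 0.5 * rng.standard_normal(n)
roi_b = np.where(np.arange(n) < n // 2,
                 shared + 0.5 * rng.standard_normal(n),
                 rng.standard_normal(n))

def sliding_window_fc(x, y, width=50, step=5):
    """Pearson correlation within each window; returns window centers and r."""
    centers, rs = [], []
    for start in range(0, len(x) - width + 1, step):
        rs.append(np.corrcoef(x[start:start + width], y[start:start + width])[0, 1])
        centers.append(start + width // 2)
    return np.array(centers), np.array(rs)

centers, rs = sliding_window_fc(roi_a, roi_b)

# A single static correlation averages over the change; the windowed
# estimates reveal the transition between connectivity states.
print(f"static r = {np.corrcoef(roi_a, roi_b)[0, 1]:.2f}")
print(f"first-half mean r = {rs[centers < n // 2].mean():.2f}, "
      f"second-half mean r = {rs[centers >= n // 2].mean():.2f}")
```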
https://en.wikipedia.org/wiki/Dynamic_functional_connectivity
Functional connectivity software is used to study functional properties of the connectome using functional magnetic resonance imaging (fMRI) data in the resting state and during tasks. To access many of these software applications, visit the NIH-funded Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC) site.
https://en.wikipedia.org/wiki/List_of_functional_connectivity_software
The Human Connectome Project (HCP) was a five-year project (later extended to 10 years) sponsored by sixteen components of the National Institutes of Health, split between two consortia of research institutions. The project was launched in July 2009[1] as the first of three Grand Challenges of the NIH's Blueprint for Neuroscience Research.[2] On September 15, 2010, the NIH announced that it would award two grants: $30 million over five years to a consortium led by Washington University in St. Louis and the University of Minnesota, with strong contributions from the University of Oxford (FMRIB), and $8.5 million over three years to a consortium led by Harvard University, Massachusetts General Hospital and the University of California Los Angeles.[3]

The goal of the Human Connectome Project was to build a "network map" (connectome) that sheds light on the anatomical and functional connectivity within the healthy human brain, as well as to produce a body of data to facilitate research into brain disorders such as dyslexia, autism, Alzheimer's disease, and schizophrenia.[4][5] A number of successor projects are currently in progress, based on the Human Connectome Project results.[6]

The WU-Minn-Oxford consortium developed improved MRI instrumentation, image acquisition, and image analysis methods for mapping the connectivity of the human brain at spatial resolutions significantly better than previously available. Using these methods, the consortium collected a large amount of MRI and behavioral data on 1,200 healthy adults – twin pairs and their siblings from 300 families – using a special 3 tesla MRI instrument. In addition, it scanned 184 subjects from this pool at 7 tesla, at higher spatial resolution. The data were analyzed to show the anatomical and functional connections between parts of the brain for each individual, and were related to behavioral test data. Comparing the connectomes and genetic data of genetically identical twins with those of fraternal twins revealed the relative contributions of genes and environment in shaping brain circuitry, and pinpointed relevant genetic variation. The maps also shed light on how brain networks are organized.

Using a combination of non-invasive imaging technologies, including resting-state fMRI and task-based functional MRI, MEG and EEG, and diffusion MRI, the WU-Minn consortium mapped connectomes at the macro scale – mapping large brain systems that can be divided into anatomically and functionally distinct areas, rather than mapping individual neurons.

Dozens of investigators and researchers from nine institutions contributed to this project. Research institutions include: Washington University in St. Louis, the Center for Magnetic Resonance Research at the University of Minnesota, University of Oxford, Saint Louis University, Indiana University, D'Annunzio University of Chieti–Pescara, Ernst Strungmann Institute, Warwick University, Advanced MRI Technologies, and the University of California at Berkeley.[7] The data that resulted from this research is publicly available in an open-source, web-accessible neuroinformatics platform.[8][9]

The MGH/Harvard-UCLA consortium focused on optimizing MRI technology for imaging the brain's structural connections using diffusion MRI, with the goal of increasing spatial resolution, quality, and speed. Diffusion MRI, employed in both projects, maps the brain's fibrous long-distance connections by tracking the motion of water. Water diffusion patterns in different types of cells allow the detection of different types of tissues.
Using this imaging method, the long extensions of neurons, called white matter, can be seen in sharp relief.[10][11] The new scanner built at the MGH Martinos Center for this project was "4 to 8 times as powerful as conventional systems, enabling imaging of human neuroanatomy with greater sensitivity than was previously possible."[3] The scanner has a maximum gradient strength of 300 mT/m and a slew rate of 200 T/m/s, with b-values tested up to 20,000 s/mm². For comparison, a standard gradient coil is 45 mT/m.[12][13][14]

To better understand the relationship between brain connectivity and behavior, the Human Connectome Project used a reliable and well-validated battery of measures that assess a wide range of human functions. The core of its battery is the tools and methods developed by the NIH Toolbox for Assessment of Neurological and Behavioral Function.[15]

The Human Connectome Project has grown into a large group of research teams. These teams make use of the style of brain scanning developed by the project.[16] The studies usually involve large groups of participants, scanning of many angles of participants' brains, and careful documentation of the location of the structures in each participant's brain.[17] Studies affiliated with the Human Connectome Project are currently cataloged by the Connectome Coordination Facility. The studies fall into three categories: Healthy Adult Connectomes, Lifespan Connectome Data, and Connectomes Related to Human Disease. Under each of these categories are research groups working on specific questions.

The Human Connectome Project Young Adult study[18] made data on the brain connections of 1,100 healthy young adults available to the scientific community.[19] Scientists have used data from the study to support theories about which areas of the brain communicate with one another.[20] For example, one study used data from the project to show that the amygdala, a part of the brain essential for emotional processing, is connected to the parts of the brain that receive information from the senses and plan movement.[21] Another study showed that healthy individuals who had a high tendency to experience anxious or depressed mood had fewer connections between the amygdala and a number of brain areas related to attention.

There are currently four research groups collecting data on connections in the brains of populations other than young adults. The purpose of these groups is to determine ordinary brain connectivity during infancy, childhood, adolescence, and aging. Scientists will use the data from these research groups in the same manner in which they have used data from the Human Connectome Project Young Adult study.[22]

Fourteen research groups investigate how connections in the brain change during the course of a particular disease. Four of the groups focus on Alzheimer's disease or dementia. Alzheimer's disease and dementia are diseases that begin during aging; memory loss and cognitive impairment mark their progression. While scientists consider Alzheimer's disease to be a disease with a specific cause, dementia describes symptoms which could be attributed to a number of causes. Two other research groups investigate how diseases that disrupt vision change connectivity in the brain. Another four of the research groups focus on anxiety disorders and major depressive disorder, psychological disorders that result in abnormal emotional regulation.
Two more of the research groups focus on the effects of psychosis, a symptom of some psychological disorders in which an individual perceives reality differently than others do. One of the teams researches epilepsy, a disease characterized by seizures. Finally, one research team is documenting the brain connections of the Amish people, a religious and ethnic group that has high rates of some psychological disorders.[23]

Although theories have been put forth about the way brain connections change in the diseases under investigation, many of these theories have been supported by data from healthy populations.[21] For example, an analysis of the brains of healthy individuals supported the theory that individuals with anxiety disorders and depression have less connectivity between their emotional centers and the areas that govern attention. By collecting data specifically from individuals with these diseases, researchers hope to have a more certain idea of how brain connections in these individuals change over time.

The project was completed in 2021,[24] and a retrospective analysis is available.[25] A number of new projects have started based on the results.[6]
https://en.wikipedia.org/wiki/Human_Connectome_Project
The Budapest Reference Connectome server computes the frequently appearing anatomical brain connections of 418 healthy subjects.[1][2] It was prepared from diffusion MRI datasets of the Human Connectome Project, combined into a reference connectome (or braingraph), which can be downloaded in CSV and GraphML formats and visualized on the site in 3D.

The Budapest Reference Connectome has 1015 nodes, corresponding to anatomically identified gray matter areas. The user can set numerous parameters, and the resulting consensus connectome is readily visualized on the webpage.[2] Users can zoom, rotate, and query the anatomical label of the nodes on the graphical component.

The Budapest Reference Connectome is a consensus graph of the brain graphs of 96 subjects in Version 2 and 418 subjects in Version 3. Only those edges are returned which are present in a given percentage of the subjects. Each of the selected edges has a certain weight in each of the graphs containing that edge, so these multiple weights are combined into a single weight by taking either their mean (i.e., average) or their median. The user interface allows the customization of these parameters: the user can select the minimum frequency of the edges returned. There is an option for viewing and comparing the female or male reference connectomes. The connectomes of women contain significantly more edges than those of men, and a larger portion of the edges in the connectomes of women run between the two hemispheres.[3][4][5]

The Budapest Reference Connectome has led the researchers to the discovery of the Consensus Connectome Dynamics of the human brain graphs. The edges that appear in all of the brain graphs form a connected subgraph around the brainstem. By allowing gradually less frequent edges, this core subgraph grows continuously, like a shrub. The growth dynamics may reflect individual brain development and provide an opportunity to direct some edges of the human consensus brain graph.[6]
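The consensus construction described above (keep an edge only if it appears in at least a chosen fraction of subjects, and combine its per-subject weights by mean or median) is straightforward to sketch. The data structures and toy subject graphs below are illustrative assumptions, not the server's actual implementation.

```python
import numpy as np

def consensus_connectome(subject_graphs, min_frequency=0.9, combine="median"):
    """subject_graphs: list of dicts mapping an undirected edge (node_a, node_b)
    to its weight in that subject's brain graph. Returns the edges present in
    at least min_frequency of subjects, with their weights combined."""
    n = len(subject_graphs)
    weights = {}
    for g in subject_graphs:
        for edge, w in g.items():
            weights.setdefault(edge, []).append(w)
    agg = np.median if combine == "median" else np.mean
    return {edge: float(agg(ws))
            for edge, ws in weights.items()
            if len(ws) / n >= min_frequency}

# Three toy subjects; only the A-B edge appears in every brain graph.
subjects = [{("A", "B"): 3.0, ("B", "C"): 1.0},
            {("A", "B"): 5.0, ("A", "C"): 2.0},
            {("A", "B"): 4.0, ("B", "C"): 2.0}]
print(consensus_connectome(subjects, min_frequency=1.0))  # {('A', 'B'): 4.0}
```

Lowering min_frequency admits progressively less frequent edges, which is exactly the parameter sweep behind the "shrub-like" growth of the core subgraph described above.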
https://en.wikipedia.org/wiki/Budapest_Reference_Connectome
A Drosophila connectome is a list of neurons in the Drosophila melanogaster (fruit fly) nervous system, and the chemical synapses between them. The fly's central nervous system consists of the brain plus the ventral nerve cord, and both are known to differ considerably between male and female.[1][2] Dense connectomes have been completed for the female adult brain,[3] the male[4] and female[5] nerve cords, and the female larval stage.[6] The available connectomes show only chemical synapses – other forms of inter-neuron communication, such as gap junctions or neuromodulators, are not represented. Drosophila is the most complex creature with a connectome; connectomes had previously been obtained for only three simpler organisms, the first being C. elegans.[7] The connectomes have been obtained by the methods of neural circuit reconstruction, which over the course of many years worked up through various subsets of the fly brain to the current efforts aimed at a unified central brain and VNC connectome, for both male and female flies.[8][9]

Connectome research (connectomics) has a number of competing objectives. On the one hand, investigators prefer an organism small enough that the connectome can be obtained in a reasonable amount of time. This argues for a small creature. On the other hand, one of the main uses of a connectome is to relate structure and behavior, so an animal with a large behavioral repertoire is desirable. It is also very helpful to use an animal with a large existing community of experimentalists, and many available genetic tools. Drosophila meets all of these requirements.

Synapses in Drosophila are polyadic,[16] meaning they have multiple post-synaptic elements (commonly called PSDs, for post-synaptic densities) opposed to one pre-synaptic element (commonly called a T-bar, due to its most common appearance). Synapse counts can therefore be reported either way: as the number of structures, or the number of partners. Cell and synapse counts are known to vary between individuals.[17]

For the larva, there is one full female connectome available. For adults, a full connectome of the female brain (~120,000 neurons, ~30,000,000 synapses)[18][19][3] and connectomes of both the male and female ventral nerve cord (VNC, the fly's equivalent of the spinal cord, ~14,600 neurons)[20][21] are also available. At least two teams are working on complete adult CNS connectomes that include both the brain and the VNC, in both male and female flies.[22][23]

Drosophila connectomics started in 1991 with a description of the circuits of the lamina.[24] However, the methods used were largely manual, and further progress awaited more automated techniques. In 2011, a high-level connectome of the full fly brain, at the level of brain compartments and interconnecting tracts of neurons, was published,[25] and is available online.[26] New techniques such as digital image processing began to be applied to detailed neural reconstruction.[27] Reconstructions of larger regions soon followed, including a column of the medulla,[28] also in the visual system of the fruit fly, and the alpha lobe of the mushroom body.[29]

In 2020, a dense connectome of half the central brain of Drosophila was released,[30] along with a web site that allows queries and exploration of this data.[31][32] Publications on the methods used in the reconstruction and initial analysis of the 'hemibrain' connectome followed. This effort was a collaboration between the Janelia FlyEM team and Google.[19][33] This dataset is an incomplete but large section of the fly central brain.
It was collected using focused ion beam scanning electron microscopy (FIB-SEM), which generated an 8 nm isotropic dataset, then automatically segmented using a flood-filling network before being manually proofread by a team of experts. Finally, estimated neurotransmitter IDs were added.[34]

In 2017, a full adult fly brain (FAFB) volume was imaged by a team at Janelia Research Campus using a novel high-throughput serial section transmission electron microscopy (ssTEM) pipeline.[35] At the time, however, automated methods could not cope with its reconstruction, but the volume was available for sparse tracing of selected circuits.[36] Six years later, in 2023, Sebastian Seung's lab at Princeton used convolutional neural networks (CNNs) to automatically segment neurons, while Jan Funke's lab at Janelia used similar techniques to detect pre- and post-synaptic sites.[37] This automated version was then used as a starting point for a massive community effort among fly neuroscientists to proofread neuronal morphologies by correcting errors and adding information about cell type and other attributes.[38] This effort, called FlyWire, was conducted by Sebastian Seung and Mala Murthy of the Princeton Neuroscience Institute in conjunction with a large team of other scientists and labs called the FlyWire Consortium.[38][39] The full brain connectome produced by this effort is now publicly available and searchable through the FlyWire Codex.[40][41] This full brain connectome (of a female) contains roughly 5×10⁷ chemical synapses between ~130,000 neurons.[42] Estimated neurotransmitter IDs were added, again using techniques from the Funke lab.[34] A projectome, a map of projections between regions, can be derived from the connectome.

Members of the fly connectomics community have made an effort to match cell types between FlyWire and the hemibrain. They found that, at first pass, 61% of hemibrain types are found in the FlyWire dataset and that, out of these consensus cell types, 53% of "edges" from one cell type to another can be found in both datasets (edges connected by at least 10 synapses are much more consistently found across datasets).[43] In parallel, a consensus cell type atlas for the Drosophila brain was published, produced from this 'FlyWire' connectome and the prior 'hemibrain'.[44] This resource includes 4,552 cell types: 3,094 are rigorous validations of those previously proposed in the hemibrain connectome, and 1,458 are new cell types, arising mostly from the fact that the FlyWire connectome spans the whole brain, whereas the hemibrain derives from a subvolume. Comparison of these distinct adult Drosophila connectomes showed that cell type counts and strong connections were largely stable, but connection weights were surprisingly variable within and across animals.

There are two publicly available datasets of the adult fly ventral nerve cord (VNC).
The female adult nerve cord (FANC) was collected using high-throughput ssTEM by Wei-Chung Allen Lee’s lab atHarvard Medical School.[45]It then underwent automatic segmentation and synapse prediction using CNNs, and researchers at Harvard and theUniversity of Washingtonmapped motor neurons with cell bodies in the VNC to their muscular targets by cross-referencing between the EM dataset, a high-resolution nanotomography image volume of the fly leg, and sparse genetic lines to label individual neurons withfluorescent proteins.[46]The rest of the FANC was reconstructed by 2024.[5] The male adult nerve cord (MANC) was collected and segmented at Janelia using FIB-SEM and flood-filling network protocols modified from the Hemibrain pipeline.[47]In a collaboration between researchers at Janelia, Google, theUniversity of Cambridge, and theMRC Laboratory of Molecular Biology(LMB), it has been fully proofread and annotated with cell types and other properties (in particular predicted neurotransmitter identities[48]), and is searchable on neuPrint.[49] The connectome of a completecentral nervous system(connected brain and VNC) of a first-instarD. melanogasterlarvahas been reconstructed as a single dataset of 3,016 neurons.[6][50][51][52]The imaging was done at Janelia using serial section electron microscopy.[6]This dataset was segmented and annotated manually using CATMAID by a team of people mainly led by researchers at Janelia, Cambridge, and the MRC LMB.[53]They found that the larval brain was composed of 3,016 neurons and 548,000 synapses. 93% of brain neurons had a homolog in the opposite hemisphere. Of the synapses, 66.6% were axo-dendritic, 25.8% were axo-axonic, 5.8% were dendro-dendritic, and 1.8% were dendro-axonic. To study the connectome, they treated it as a directed graph with the neurons forming the nodes and the synapses forming the edges. Using this representation, Winding et al. found that the larval brain neurons could be clustered into 93 different types, based on connectivity alone. These types aligned with known neural groups includingsensory neurons(visual, olfactory, gustatory, thermal, etc.),descending neurons, andascending neurons. The authors ordered these neuron types based on proximity to brain inputs versus brain outputs. Using this ordering, they could quantify the proportion of recurrent connections, defined as the set of connections going from neurons closer to outputs back towards inputs. They found that 41% of all brain neurons formed a recurrent connection. The neuron types with the most recurrent connections were thedopaminergic neurons(57%),mushroom bodyfeedback neurons (51%),mushroom bodyoutput neurons (45%), andconvergence neurons(42%) (receiving input from mushroom body and lateral horn regions). These neurons, implicated in learning, memory, and action-selection, form a set of recurrent loops.
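Treating a connectome as a weighted directed graph, as Winding et al. did, makes analyses such as the recurrence measure above straightforward to express. The following is a minimal sketch of that idea in Python using networkx; the neurons, ranks, and synapse counts are invented toy values, not data from the larval dataset.

```python
# Sketch: quantifying recurrent connections in a connectome graph,
# in the spirit of the larval-brain analysis described above.
# Neurons, ranks, and synapse counts are toy values.
import networkx as nx

G = nx.DiGraph()
# rank: position in an input -> output ordering (lower = closer to inputs)
neurons = {"sens1": 0, "sens2": 0, "inter1": 1, "mbon": 2, "dan": 2, "desc1": 3}
G.add_nodes_from((n, {"rank": r}) for n, r in neurons.items())
# edge weights stand in for synapse counts between pairs of neurons
G.add_weighted_edges_from([
    ("sens1", "inter1", 12), ("sens2", "inter1", 8),
    ("inter1", "mbon", 20), ("mbon", "desc1", 15),
    ("dan", "inter1", 5),   # feedback: higher rank -> lower rank
    ("mbon", "dan", 7),
])

def is_recurrent(u, v):
    """An edge is recurrent if it points from closer-to-output
    back towards closer-to-input."""
    return G.nodes[u]["rank"] > G.nodes[v]["rank"]

recurrent_neurons = {u for u, v in G.edges if is_recurrent(u, v)}
fraction = len(recurrent_neurons) / G.number_of_nodes()
print(f"{fraction:.0%} of neurons make at least one recurrent connection")
```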
One of the main uses of theDrosophilaconnectome is to understand the neural circuits and other brain structures that give rise to behavior. This area is under very active investigation.[54][55]For example, the fruit fly connectome has been used to identify an area of the fruit fly brain that is involved in odor detection and tracking. Flies choose a direction in turbulent conditions by combining information about the direction of air flow and the movement of odor packets. Based on the fly connectome, this processing must occur in the “fan-shaped body”, where wind-sensing neurons and olfactory direction-sensing neurons cross.[56][57] A natural question is whether the connectome will allow simulation of the fly's behavior. However, the connectome alone is not sufficient. A comprehensive simulation would need to includegap junctionvarieties and locations, identities ofneurotransmitters,receptortypes and locations,neuromodulatorsandhormones(with sources and receptors), the role ofglial cells,time evolution rulesfor synapses, and more.[58][59]However, some pathways have been simulated using only the connectome plus neurotransmitter predictions.[60]
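The cited pathway simulations are beyond the scope of this article, but the general idea - propagate activity through the measured connectivity, with synapse signs taken from neurotransmitter predictions - can be sketched as a simple leaky firing-rate model. Everything below (circuit size, weights, signs, time constants) is illustrative, not the published pipeline.

```python
# Sketch: a minimal firing-rate simulation driven only by a "connectome"
# (synapse counts) plus neurotransmitter-based signs. All numbers are toys.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                     # toy circuit size
W = rng.poisson(2, (n, n)).astype(float)   # synapse counts between cells
sign = rng.choice([+1.0, -1.0], n)         # +1 excitatory, -1 inhibitory
W = W * sign[np.newaxis, :]                # sign set by the presynaptic cell
W /= np.abs(W).sum(axis=1, keepdims=True)  # crude normalization

tau, dt = 20.0, 1.0                        # time constant and step, in ms
r = np.zeros(n)                            # firing rates
stim = np.zeros(n); stim[:5] = 1.0         # drive a few "sensory" cells

for _ in range(200):                       # 200 ms of simulated time
    drive = W @ np.maximum(r, 0) + stim    # rectified rates propagate
    r += dt / tau * (-r + drive)           # leaky integration

print("steady-state rates of the last 5 cells:", np.round(r[-5:], 3))
```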
https://en.wikipedia.org/wiki/Drosophila_connectome
Biomimeticsorbiomimicryis the emulation of the models, systems, and elements of nature for the purpose of solving complexhumanproblems.[2][3][4]The terms "biomimetics" and "biomimicry" are derived fromAncient Greek:βίος(bios), life, and μίμησις (mīmēsis), imitation, from μιμεῖσθαι (mīmeisthai), to imitate, from μῖμος (mimos), actor. A closely related field isbionics.[5] Nature has gone throughevolutionover the 3.8 billion years since life is estimated to have appeared on the Earth.[6]It has evolved species with high performance using commonly found materials. Surfaces of solids interact with other surfaces and the environment and derive the properties of materials. Biological materials are highly organized from the molecular to the nano-, micro-, and macroscales, often in a hierarchical manner with intricate nanoarchitecture that ultimately makes up a myriad of different functional elements.[7]Properties of materials and surfaces result from a complex interplay between surface structure and morphology and physical and chemical properties. Many materials, surfaces, and objects in general provide multifunctionality. Various materials, structures, and devices have been fabricated for commercial interest by engineers,material scientists, chemists, and biologists, and for beauty, structure, and design by artists and architects. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance,hydrophobicity, self-assembly, and harnessingsolar energy. Economic impact of bioinspired materials and surfaces is significant, on the order of several hundred billion dollars per year worldwide. One of the early examples of biomimicry was the study ofbirdsto enablehuman flight. Although never successful in creating a "flying machine",Leonardo da Vinci(1452–1519) was a keen observer of theanatomyand flight of birds, and made numerous notes and sketches on his observations as well as sketches of "flying machines".[8]TheWright Brothers, who succeeded in flying the first heavier-than-air aircraft in 1903, allegedly derived inspiration from observations of pigeons in flight.[9] During the 1950s, the AmericanbiophysicistandpolymathOtto Schmittdeveloped the concept of "biomimetics".[3]During his doctoral research, he developed theSchmitt triggerby studying the nerves in squid, attempting to engineer a device that replicated the biological system ofnerve propagation.[10]He continued to focus on devices that mimic natural systems and by 1957 he had perceived a converse to the standard view ofbiophysicsat that time, a view he would come to call biomimetics.[3] Biophysics is not so much a subject matter as it is a point of view. It is an approach to problems of biological science utilizing the theory and technology of the physical sciences. Conversely, biophysics is also a biologist's approach to problems of physical science and engineering, although this aspect has largely been neglected. In 1960,Jack E. Steelecoined a similar term,bionics, atWright-Patterson Air Force Basein Dayton, Ohio, where Otto Schmitt also worked. 
Steele defined bionics as "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues".[5][12]During a later meeting in 1963, Schmitt stated: Let us consider what bionics has come to mean operationally and what it or some word like it (I prefer biomimetics) ought to mean in order to make good use of the technical skills of scientists specializing, or rather, I should say, despecializing into this area of research. In 1969, Schmitt used the term "biomimetic" in the title of one of his papers,[13]and by 1974 it had found its way intoWebster's Dictionary. Bionics entered the same dictionary earlier in 1960 as "a science concerned with the application of data about the functioning of biological systems to the solution of engineering problems". Bionic took on a different connotation whenMartin Caidinreferenced Jack Steele and his work in the novelCyborg, which later resulted in the 1974 television seriesThe Six Million Dollar Manand its spin-offs. The term bionic then became associated with "the use of electronically operated artificial body parts" and "having ordinary human powers increased by or as if by the aid of such devices".[14]Because the termbionictook on the implication of supernatural strength, the scientific community in English-speaking countries largely abandoned it.[12] The termbiomimicryappeared as early as 1982.[15]Biomimicry was popularized by scientist and authorJanine Benyusin her 1997 bookBiomimicry: Innovation Inspired by Nature. Biomimicry is defined in the book as a "new science that studies nature's models and then imitates or takes inspiration from these designs and processes to solve human problems". Benyus suggests looking to Nature as a "Model, Measure, and Mentor" and emphasizes sustainability as an objective of biomimicry.[16] The potential long-term impacts of biomimicry were quantified in a 2013 Fermanian Business & Economic Institute Report commissioned by theSan Diego Zoo. The findings demonstrated the potential economic and environmental benefits of biomimicry, which can be further seen in Johannes-Paul Fladerer and Ernst Kurzmann's "managemANT" approach.[17]This term (a combination of the words "management" and "ant") describes the usage of behavioural strategies of ants in economic and management strategies.[18][19] Biomimetics could in principle be applied in many fields. Because of the diversity and complexity of biological systems, the number of features that might be imitated is large. Biomimetic applications are at various stages of development from technologies that might become commercially usable to prototypes.[4]Murray's law, which in conventional form determines the optimum diameter of blood vessels, has been re-derived to provide simple equations for the pipe or tube diameter which gives a minimum mass engineering system.[20] Aircraft wingdesign[21]and flight techniques[22]are being inspired by birds and bats.
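For the Murray's law result mentioned above, the conventional form states that at a branch point the cube of the parent vessel's diameter equals the sum of the cubes of the daughter diameters; the cited re-derivation is reported to yield similarly simple formulas for minimum-mass pipe systems. A small sketch of the classic cube law; the function and values are our own illustration, not from the cited work.

```python
# Sketch of the conventional Murray's-law calculation: at a bifurcation,
# parent_d**3 == sum of daughter_d**3. The minimum-mass engineering
# re-derivation gives analogous formulas whose details depend on the
# loading case; only the classic biological form is shown here.
def murray_daughter_diameter(parent_d: float, n_daughters: int = 2) -> float:
    """Diameter of each daughter pipe for a symmetric n-way branch,
    from parent_d**3 == n_daughters * daughter_d**3."""
    return parent_d * (1.0 / n_daughters) ** (1.0 / 3.0)

d0 = 10.0                                   # parent diameter, arbitrary units
d1 = murray_daughter_diameter(d0)
print(f"symmetric bifurcation: daughters of {d1:.2f} for a parent of {d0:.2f}")
# check: d0**3 is (approximately) 2 * d1**3
```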
Theaerodynamic, streamlined design of the improved Japanese high-speed trainShinkansen500 Serieswas modelled after the beak of thekingfisher.[23] Biorobotsbased on the physiology and methods oflocomotion of animalsincludeBionicKangaroo, which moves like a kangaroo, saving energy from one jump and transferring it to its next jump;[24]Kamigami Robots, a children's toy that mimics cockroach locomotion to run quickly and efficiently over indoor and outdoor surfaces;[25]and Pleobot, a shrimp-inspired robot used to study metachronal swimming and the ecological impacts of this propulsive gait on the environment.[26] Biomimetic flying robots (BFRs) take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate the lift and thrust, or they can be propeller-actuated. BFRs with flapping wings have increased stroke efficiencies, increased maneuverability, and reduced energy consumption in comparison to propeller-actuated BFRs.[27]Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimizeedge flutteringandpressure-induced wingtip curlby increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments. Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype.[28]Examples of bat inspired BFRs include Bat Bot[29]and the DALER.[30]Mammal inspired BFRs can be designed to be multi-modal; therefore, they are capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings.[30]Alternatively, the BFR can pitch up and increase the amount of drag it experiences.[28]By increasing the drag force, the BFR will decelerate and minimize the impact upon grounding. Different land gait patterns can also be implemented.[28] Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling.[31]The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait.[31]An example of a raptor inspired BFR is the prototype by Savastano et al.[32]The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.[33] Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park,[34]and a dragonfly inspired BFR is the prototype by Hu et al.[35]The flapping frequency of insect inspired BFRs is much higher than that of other BFRs; this is because of theaerodynamics of insect flight.[36]Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments. The prototype by Phan and Park took inspiration from the rhinoceros beetle, so it can successfully continue flight even after a collision by deforming its hindwings.
Living beings have adapted to a constantly changing environment during evolution through mutation, recombination, and selection.[37]The core idea of the biomimetic philosophy is that nature's inhabitants including animals, plants, and microbes have the most experience in solving problems and have already found the most appropriate ways to last on planet Earth.[38]Similarly, biomimetic architecture seeks solutions for building sustainability present in nature. While nature serves as a model, there are few examples of biomimetic architecture that aim to be nature positive.[39] The 21st century has seen a ubiquitous waste of energy due to inefficient building designs, in addition to the over-utilization of energy during the operational phase of its life cycle.[40]In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales.[37]As a result, there has been a rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches tosustainable designthat follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form but instead seeking to use nature to solve problems of the building's functioning and saving energy. The term biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture.[37]Biomimetic architecture uses nature as a model, measure and mentor for providing architectural solutions across scales, which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard of measuring sustainability, and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.[16] Biomorphic architecture, also referred to as bio-decoration,[37]on the other hand, refers to the use of formal and geometric elements found in nature, as a source of inspiration for aesthetic properties in designed architecture, and may not necessarily have non-physical, or economic functions. A historic example of biomorphic architecture dates back to Egyptian, Greek and Roman cultures, using tree and plant forms in the ornamentation of structural columns.[41] Within biomimetic architecture, two basic procedures can be identified, namely, the bottom-up approach (biology push) and top-down approach (technology pull).[42]The boundary between the two approaches is blurry with the possibility of transition between the two, depending on each individual case. Biomimetic architecture is typically carried out in interdisciplinary teams in which biologists and other natural scientists work in collaboration with engineers, material scientists, architects, designers, mathematicians and computer scientists. In the bottom-up approach, the starting point is a new result from basic biological research promising for biomimetic implementation. For example, developing a biomimetic material system after the quantitative analysis of the mechanical, physical, and chemical properties of a biological system. 
In the top-down approach, biomimetic innovations are sought for already existing developments that have been successfully established on the market. The cooperation focuses on the improvement or further development of an existing product. Researchers studied the ability oftermites to maintain virtually constant temperature and humidity in theirtermite moundsin Africa despite outside temperatures that vary from 1.5 to 40 °C (34.7 to 104.0 °F). Researchers initially scanned a termite mound and created 3-D images of the mound structure, which revealed construction that could influence humanbuilding design. TheEastgate Centre, a mid-rise office complex inHarare,Zimbabwe,[43]stays cool via a passive cooling architecture that uses only 10% of the energy of a conventional building of the same size. Researchers in theSapienza University of Romewere inspired by the natural ventilation in termite mounds and designed a double façade that significantly cuts down over-lit areas in a building. Scientists have imitated the porous nature of mound walls by designing a facade with double panels that was able to reduce heat gained by radiation and increase heat loss by convection in the cavity between the two panels. The overall cooling load on the building's energy consumption was reduced by 15%.[44] In 2008, mimicking the mosquito, researchers developed a 3-prong needle that significantly reduced the pain caused by needle insertion, for example when getting an injection. The methodology is improving and science is getting ever closer to the way mosquitoes feed quickly and efficiently. A similar inspiration was drawn from the porous walls of termite mounds to design a naturally ventilated façade with a small ventilation gap. This design of façade is able to induce air flow due to theVenturi effectand continuously circulates rising air in the ventilation slot. Significant transfer of heat between the building's external wall surface and the air flowing over it was observed.[45]The design is coupled withgreeningof the façade. The green wall facilitates additional natural cooling via evaporation, respiration and transpiration in plants. The damp plant substrate further supports the cooling effect.[46] Scientists inShanghai Universitywere able to replicate the complex microstructure of the clay-made conduit network in the mound to mimic the excellent humidity control in mounds. They proposed a porous humidity control material (HCM) usingsepioliteandcalcium chloridewith water vapor adsorption-desorption content at 550 grams per meter squared. Calcium chloride is adesiccantand improves the water vapor adsorption-desorption property of the Bio-HCM. The proposed bio-HCM has a regime of interfiber mesopores which acts as a mini reservoir. The flexural strength of the proposed material was estimated to be 10.3 MPa using computational simulations.[47][48] In structural engineering, the Swiss Federal Institute of Technology (EPFL) has incorporated biomimetic characteristics in an adaptive deployable "tensegrity" bridge. The bridge can carry out self-diagnosis and self-repair.[49]Thearrangement of leaves on a planthas been adapted for better solar power collection.[50] Analysis of the elastic deformation happening when a pollinator lands on the sheath-like perch part of the flowerStrelitzia reginae(known asbird-of-paradiseflower) has inspired architects and scientists from theUniversity of FreiburgandUniversity of Stuttgartto create hingeless shading systems that can react to their environment.
These bio-inspired products are sold under the name Flectofin.[51][52] Other hingeless bioinspired systems include Flectofold.[53]Flectofold has been inspired from the trapping system developed by the carnivorous plantAldrovanda vesiculosa. There is a great need for new structural materials that are light weight but offer exceptional combinations ofstiffness, strength, andtoughness. Such materials would need to be manufactured into bulk materials with complex shapes at high volume and low cost and would serve a variety of fields such as construction, transportation, energy storage and conversion.[54]In a classic design problem, strength and toughness are more likely to be mutually exclusive, i.e., strong materials are brittle and tough materials are weak. However, natural materials with complex and hierarchical material gradients that span fromnano- to macro-scales are both strong and tough. Generally, most natural materials utilize limited chemical components but complex material architectures that give rise to exceptional mechanical properties. Understanding the highly diverse and multi functional biological materials and discovering approaches to replicate such structures will lead to advanced and more efficient technologies.Bone,nacre(abalone shell), teeth, the dactyl clubs of stomatopod shrimps and bamboo are great examples of damage tolerant materials.[55]The exceptional resistance tofractureof bone is due to complex deformation and toughening mechanisms that operate at spanning different size scales — nanoscale structure of protein molecules to macroscopic physiological scale.[56] Nacreexhibits similar mechanical properties however with rather simpler structure. Nacre shows a brick and mortar like structure with thick mineral layer (0.2–0.9 μm) of closely packed aragonite structures and thin organic matrix (~20 nm).[57]While thin films and micrometer sized samples that mimic these structures are already produced, successful production of bulk biomimetic structural materials is yet to be realized. However, numerous processing techniques have been proposed for producing nacre like materials.[55]Pavement cells, epidermal cells on the surface of plant leaves and petals, often form wavy interlocking patterns resembling jigsaw puzzle pieces and are shown to enhance the fracture toughness of leaves, key to plant survival.[58]Their pattern, replicated in laser-engravedPoly(methyl methacrylate)samples, was also demonstrated to lead to increased fracture toughness. It is suggested that the arrangement and patterning of cells play a role in managing crack propagation in tissues.[58] Biomorphic mineralizationis a technique that produces materials with morphologies and structures resembling those of natural living organisms by using bio-structures as templates for mineralization. Compared to other methods of material production, biomorphic mineralization is facile, environmentally benign and economic.[59] Freeze casting(ice templating), an inexpensive method to mimic natural layered structures, was employed by researchers at Lawrence Berkeley National Laboratory to create alumina-Al-Si and IT HAP-epoxy layered composites that match the mechanical properties of bone with an equivalent mineral/organic content.[60]Various further studies[61][62][63][64]also employed similar methods to produce high strength and high toughness composites involving a variety of constituent phases. 
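As a rough sense of scale for the nacre dimensions quoted above (mineral layers of 0.2–0.9 μm separated by ~20 nm of organic matrix), an idealized one-dimensional stack implies a very high mineral volume fraction. The sketch below is simple arithmetic on those figures, not a measurement.

```python
# Sketch: mineral volume fraction implied by nacre's brick-and-mortar
# dimensions, treating it as an ideal stack of alternating layers.
def mineral_volume_fraction(t_mineral_um: float, t_organic_um: float) -> float:
    return t_mineral_um / (t_mineral_um + t_organic_um)

for t in (0.2, 0.9):                       # thin and thick ends of the range
    phi = mineral_volume_fraction(t, 0.020) # organic matrix ~20 nm = 0.020 um
    print(f"mineral layer {t} um -> volume fraction ~{phi:.0%}")
# -> roughly 91-98% mineral, consistent with nacre's high ceramic content
```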
Recent studies demonstrated production of cohesive and self-supporting macroscopic tissue constructs that mimicliving tissuesby printing tens of thousands of heterologous picoliter droplets in software-defined, 3D millimeter-scale geometries.[65]Efforts have also been made to mimic the design of nacre in artificialcomposite materialsusing fused deposition modelling[66]and the helicoidal structures ofstomatopodclubs in the fabrication of high performancecarbon fiber-epoxy composites.[67] Various established and novel additive manufacturing technologies like PolyJet printing, direct ink writing, 3D magnetic printing, multi-material magnetically assisted 3D printing and magnetically assistedslip castinghave also been utilized to mimic the complex micro-scale architectures of natural materials and provide huge scope for future research.[68][69][70] Spidersilk is tougher thanKevlarused inbulletproof vests.[71]Engineers could in principle use such a material, if it could be reengineered to have a long enough life, for parachute lines, suspension bridge cables, artificial ligaments for medicine, and other purposes.[16]The self-sharpening teeth of many animals have been copied to make better cutting tools.[72] New ceramics that exhibit giant electret hysteresis have also been realized.[73] Neuromorphiccomputers and sensors are electrical devices that copy the structure and function of biological neurons in order to compute. One example of this is theevent camerain which only the pixels that receive a new signal update to a new state. All other pixels do not update until a signal is received.[74] In some biological systems,self-healingoccurs via chemical releases at the site of fracture, which initiate a systemic response to transport repairing agents to the fracture site. This promotes autonomic healing.[75]To demonstrate the use of micro-vascular networks for autonomic healing, researchers developed a microvascular coating–substrate architecture that mimics human skin.[76]Bio-inspired self-healing structural color hydrogels that maintain the stability of an inverse opal structure and its resultant structural colors were developed.[77]A self-repairing membrane inspired by rapid self-sealing processes in plants was developed for inflatable lightweight structures such as rubber boats or Tensairity constructions. The researchers applied a thin soft cellular polyurethane foam coating on the inside of a fabric substrate, which closes the crack if the membrane is punctured with a spike.[78]Self-healing materials,polymersandcomposite materialscapable of mending cracks have been produced based on biological materials.[79] The self-healing properties may also be achieved by the breaking and reforming of hydrogen bonds upon cyclical stress of the material.[80] Surfacesthat recreate the properties ofshark skinare intended to enable more efficient movement through water. Efforts have been made to produce fabric that emulates shark skin.[20][81] Surface tension biomimeticsare being researched for technologies such ashydrophobicorhydrophiliccoatings and microactuators.[82][83][84][85][86] Some amphibians, such as tree andtorrent frogsand arborealsalamanders, are able to attach to and move over wet or even flooded environments without falling. These organisms have toe pads that are permanently wetted by mucus secreted from glands that open into the channels between epidermal cells.
They attach to mating surfaces by wet adhesion and they are capable of climbing on wet rocks even when water is flowing over the surface.[4]Tiretreads have also been inspired by the toe pads oftree frogs.[87]3D printed hierarchical surface models, inspired by tree and torrent frog toe pad design, have been observed to produce better wet traction than conventional tire design.[88] Marinemusselscan stick easily and efficiently to surfaces underwater under the harsh conditions of the ocean. Mussels use strong filaments to adhere to rocks in the inter-tidal zones of wave-swept beaches, preventing them from being swept away in strong sea currents. Mussel foot proteins attach the filaments to rocks, boats and practically any surface in nature including other mussels. These proteins contain a mix ofamino acidresidues which has been adapted specifically foradhesivepurposes. Researchers from the University of California Santa Barbara borrowed and simplified chemistries that the mussel foot uses to overcome this engineering challenge of wet adhesion to create copolyampholytes,[89]and one-component adhesive systems[90]with potential for employment innanofabricationprotocols. Other research has proposed adhesive glue frommussels. Leg attachment pads of several animals, including many insects (e.g.,beetlesandflies),spidersandlizards(e.g.,geckos), are capable of attaching to a variety of surfaces and are used for locomotion, even on vertical walls or across ceilings. Attachment systems in these organisms have similar structures at their terminal elements of contact, known assetae. Such biological examples have offered inspiration in order to produce climbing robots,[citation needed]boots and tape.[91]Synthetic setaehave also been developed for the production of dry adhesives. Superliquiphobicity refers to a remarkable surface property where a solid surface exhibits an extreme aversion to liquids, causing droplets to bead up and roll off almost instantaneously upon contact. This behavior arises from intricate surface textures and interactions at the nanoscale, effectively preventing liquids from wetting or adhering to the surface. The term "superliquiphobic" is derived from "superhydrophobic," which describes surfaces highly resistant to water. Superliquiphobic surfaces go beyond water repellency and display repellent characteristics towards a wide range of liquids, including those with very low surface tension or containing surfactants.[2][92] Superliquiphobicity emerges when a solid surface possesses minute roughness, forming interfaces with droplets through wetting while altering contact angles. This behavior hinges on the roughness factor (Rf), defined as the ratio of the solid-liquid contact area to its projected area, which influences contact angles. On rough surfaces, non-wetting liquids give rise to composite solid-liquid-air interfaces, their contact angles determined by the distribution of wet and air-pocket areas. The achievement of superliquiphobicity involves increasing the fractional flat geometrical area (fLA) and Rf, leading to surfaces that actively repel liquids.[93][94] The inspiration for crafting such surfaces draws from nature's ingenuity, illustrated by the "lotus effect". Leaves of water-repellent plants, like the lotus, exhibit inherent hierarchical structures featuring nanoscale wax-coated formations.[95][96]Other natural surfaces with these capabilities include beetle carapaces[97]and cactus spines,[98]which may exhibit rough features at multiple size scales. These structures lead to superhydrophobicity, where water droplets perch on trapped air bubbles, resulting in high contact angles and minimal contact angle hysteresis. This natural example guides the development of superliquiphobic surfaces, capitalizing on re-entrant geometries that can repel low surface tension liquids and achieve near-zero contact angles.[99] Creating superliquiphobic surfaces involves pairing re-entrant geometries with low surface energy materials, such as fluorinated substances or liquid-like silicones.[98]These geometries include overhangs that widen beneath the surface, enabling repellency even for minimal contact angles. These surfaces find utility in self-cleaning, anti-icing, anti-fogging, antifouling, enhanced condensation,[98]and more, presenting innovative solutions to challenges in biomedicine, desalination, atmospheric water harvesting, and energy conversion. In essence, superliquiphobicity, inspired by natural models like the lotus leaf, capitalizes on re-entrant geometries and surface properties to create interfaces that actively repel liquids. These surfaces hold immense promise across a range of applications, offering enhanced functionality and performance in various technological and industrial contexts.
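The roughness-factor discussion above is usually formalized with the standard Wenzel and Cassie-Baxter wetting models, which relate the apparent contact angle on a rough surface to the intrinsic contact angle, the roughness factor Rf, and the wetted solid fraction. The sketch below uses those textbook forms with illustrative numbers; it is not taken from the cited papers.

```python
# Sketch: the two standard wetting models behind the roughness discussion.
# Rf = true solid area / projected area; f_s = solid fraction actually
# wetted in the composite (air-pocket) state. Values are illustrative.
import math

def wenzel(theta0_deg: float, rf: float) -> float:
    """Apparent contact angle when liquid fully wets the rough texture."""
    c = max(-1.0, min(1.0, rf * math.cos(math.radians(theta0_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter(theta0_deg: float, f_s: float) -> float:
    """Apparent contact angle when droplets sit on solid tops plus air."""
    c = f_s * (math.cos(math.radians(theta0_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

print(wenzel(110.0, 2.0))         # roughness amplifies hydrophobicity (~133 deg)
print(cassie_baxter(110.0, 0.1))  # mostly air underneath -> ~159 deg
```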
Biomimetic materialsare gaining increasing attention in the field ofopticsandphotonics. There are still only a few knownbioinspired or biomimetic productsinvolving the photonic properties of plants or animals. However, understanding how nature designed such optical materials from biological resources is a current field of research. One source of biomimetic inspiration is fromplants. Plants have proven to be concept generators for the following functions: re(action)-coupling, self-adaptability, self-repair, and energy-autonomy. As plants do not have a centralized decision making unit (i.e. a brain), most plants have a decentralized autonomous system in various organs and tissues of the plant. Therefore, they react to multiple stimuli such as light, heat, and humidity.[100] One example is the carnivorous plant speciesDionaea muscipula(Venus flytrap). For the last 25 years, research has focused on the motion principles of the plant in order to develop AVFT (artificial Venus flytrap robots). Through its movement during prey capture, the plant has inspired soft robotic motion systems. The fast snap buckling (within 100–300 ms) of the trap closure movement is initiated when prey triggers the hairs of the plant within a certain time (twice within 20 s). AVFT systems exist in which the trap closure movements are actuated via magnetism, electricity, pressurized air, and temperature changes.[100] Another example of mimicking plants is thePollia condensata,also known as the marble berry. The chiralself-assemblyof cellulose inspired by thePollia condensataberry has been exploited to make optically active films.[101][102]Such films are made from cellulose, which is a biodegradable and biobased resource obtained from wood or cotton.
The structural colours can potentially be everlasting and have more vibrant colour than the ones obtained from chemical absorption of light.Pollia condensatais not the only fruit showing a structural coloured skin; iridescence is also found in berries of other species such asMargaritaria nobilis.[103]These fruits showiridescentcolors in the blue-green region of the visible spectrum which gives the fruit a strong metallic and shiny visual appearance.[104]The structural colours come from the organisation of cellulose chains in the fruit'sepicarp, a part of the fruit skin.[104]Each cell of the epicarp is made of a multilayered envelope that behaves like aBragg reflector. However, the light which is reflected from the skin of these fruits is not polarised, unlike the one arising from man-made replicates obtained from the self-assembly of cellulose nanocrystals into helicoids, which only reflect left-handedcircularly polarised light.[105] The fruit ofElaeocarpus angustifoliusalso shows structural colour, which arises from the presence of specialised cells called iridosomes which have layered structures.[104]Similar iridosomes have also been found inDelarbrea michieanafruits.[104] In plants, multilayer structures can be found either at the surface of the leaves (on top of the epidermis), such as inSelaginella willdenowii,[104]or within specialized intra-cellularorganelles, the so-called iridoplasts, which are located inside the cells of the upper epidermis.[104]For instance, the rain forest plantBegonia pavoninahas iridoplasts located inside the epidermal cells.[104] Structural colours have also been found in several algae, such as in the red algaChondrus crispus(Irish Moss).[106] Structural colorationproduces the rainbow colours ofsoap bubbles, butterfly wings and many beetle scales.[107][108]Phase-separation has been used to fabricate ultra-whitescatteringmembranes frompolymethylmethacrylate, mimicking thebeetleCyphochilus.[109]LEDlights can be designed to mimic the patterns of scales onfireflies' abdomens, improving their efficiency.[110] Morphobutterfly wings are structurally coloured to produce a vibrant blue that does not vary with angle.[111]This effect can be mimicked by a variety of technologies.[112]Lotus Carsclaim to have developed a paint that mimics theMorphobutterfly's structural blue colour.[113]In 2007,Qualcommcommercialised aninterferometric modulator displaytechnology, "Mirasol", usingMorpho-like optical interference.[114]In 2010, the dressmaker Donna Sgro made a dress fromTeijin Fibers'Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure ofMorphobutterfly wing scales.[115][116][117][118][119] Canon Inc.'s SubWavelength structure Coating uses wedge-shaped structures the size of the wavelength of visible light. The wedge-shaped structures cause a continuously changing refractive index as light travels through the coating, significantly reducinglens flare. This imitates the structure of a moth's eye.[120][121]
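The multilayer "Bragg reflector" envelopes described above select their colour by constructive interference: at normal incidence, the first-order reflection peak of a periodic two-layer stack sits at twice the optical thickness of one period. A minimal sketch follows, with hypothetical cellulose-like layer values chosen to land in the blue-green; none of these numbers come from the cited studies.

```python
# Sketch: first-order Bragg reflection from a periodic two-layer stack,
# the mechanism behind the fruit-skin "Bragg reflector" cells above.
def bragg_peak_nm(n1: float, d1_nm: float, n2: float, d2_nm: float) -> float:
    """Peak reflected wavelength at normal incidence:
    lambda = 2 * (n1*d1 + n2*d2) for one period of the stack."""
    return 2.0 * (n1 * d1_nm + n2 * d2_nm)

# e.g. cellulose-like layers alternating with watery layers (made-up values)
print(bragg_peak_nm(1.55, 80.0, 1.33, 90.0))  # -> ~487 nm, blue-green
```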
Notable figures such as the Wright Brothers and Leonardo da Vinci attempted to replicate the flight observed in birds.[122]In an effort to reduce aircraft noise, researchers have looked to the leading edge of owl feathers, which have an array of small finlets orrachisadapted to disperse aerodynamic pressure and provide nearly silent flight to the bird.[123] Holistic planned grazing, using fencing and/orherders, seeks to restoregrasslandsby carefully planning movements of largeherdsof livestock to mimic the vast herds found in nature. The natural system being mimicked and used as a template isgrazinganimals concentrated by pack predators that must move on after eating, trampling, and manuring an area, and returning only after it has fully recovered. Its founderAllan Savoryand some others have claimed potential in building soil,[124]increasing biodiversity, and reversingdesertification.[125]However, many researchers have disputed Savory's claim. Studies have often found that the method increases desertification instead of reducing it.[126][127] Someair conditioningsystems use biomimicry in their fans to increaseairflowwhile reducing power consumption.[128][129] Technologists likeJas Johlhave speculated that the functionality of vacuole cells could be used to design highly adaptable security systems.[130]"The functionality of a vacuole, a biological structure that guards and promotes growth, illuminates the value of adaptability as a guiding principle for security." The functions and significance of vacuoles are fractal in nature: the organelle has no basic shape or size; its structure varies according to the requirements of the cell. Vacuoles not only isolate threats, contain what's necessary, export waste, and maintain pressure; they also help the cell scale and grow. Johl argues these functions are necessary for any security system design.[130]The500 Series Shinkansenused biomimicry to reduce energy consumption and noise levels while increasing passenger comfort.[131]With reference to space travel, NASA and other firms have sought to develop swarm-type space drones inspired by bee behavioural patterns, and octopod terrestrial drones designed with reference to desert spiders.[132] Protein foldinghas been used to control material formation forself-assembled functional nanostructures.[133]Polar bear fur has inspired the design of thermal collectors and clothing.[134]The light refractive properties of the moth's eye have been studied to reduce the reflectivity of solar panels.[135] TheBombardier beetle's powerful repellent spray inspired a Swedish company to develop a "micro mist" spray technology, which is claimed to have a low carbon impact (compared to aerosol sprays). The beetle mixes chemicals and releases its spray via a steerable nozzle at the end of its abdomen, stinging and confusing the victim.[136] Mostviruseshave an outer capsule 20 to 300 nm in diameter. Virus capsules are remarkably robust and capable of withstanding temperatures as high as 60 °C; they are stable across thepHrange 2–10.[59]Viral capsules can be used to create nano device components such as nanowires, nanotubes, and quantum dots. Tubular virus particles such as thetobacco mosaic virus(TMV) can be used as templates to create nanofibers and nanotubes, since both the inner and outer layers of the virus are charged surfaces which can induce nucleation of crystal growth.
This was demonstrated through the production ofplatinumandgoldnanotubes using TMV as a template.[137]Mineralized virus particles have been shown to withstand various pH values by mineralizing the viruses with different materials such as silicon,PbS, andCdSand could therefore serve as useful carriers of material.[138]A spherical plant virus calledcowpea chlorotic mottle virus(CCMV) has interesting expanding properties when exposed to environments of pH higher than 6.5. Above this pH, 60 independent pores with diameters about 2 nm begin to exchange substance with the environment. The structural transition of the viral capsid can be utilized inbiomorphic mineralizationfor selective uptake and deposition of minerals by controlling the solution pH. Possible applications include using the viral cage to produce uniformly shaped and sized quantum dotsemiconductornanoparticles through a series of pH washes. This is an alternative to theapoferritincage technique currently used to synthesize uniform CdSe nanoparticles.[139]Such materials could also be used for targeted drug delivery since particles release contents upon exposure to specific pH levels.
https://en.wikipedia.org/wiki/Biomimicry
Digital architecturerefers to aspects of architecture that featuredigitaltechnologies or considers digital platforms as online spaces. The emerging field of digital architectures therefore applies to both classic architecture as well as the emerging study of social media technologies. Within classic architectural studies, the terminology is used to apply to digital skins that can be streamed images and have their appearance altered. A headquarters building design for Boston television and radio stationWGBHbyPolshek Partnershiphas been discussed as an example of digital architecture and includes a digital skin.[1] Within social media research, digital architecture refers to the technical protocols that enable, constrain, and shape user behavior in a virtual space.[2]Features of social media platforms such as how they facilitate user connections, enable functionality, and generate data are considered key properties that distinguish one digital architecture from another. Architecturecreated digitally might not involve the use of actual materials (brick, stone, glass, steel, wood).[3]It relies on "sets of numbers stored inelectromagneticformat" used to create representations and simulations that correspond to material performance and tomapout built artifacts.[3]It thus can involvedigital twinningfor planned construction or for maintenance management. Digital architecture does not just represent "ideated space"; it also creates places for human interaction that do not resemble physical architectural spaces.[3]Examples of these places in the "Internet Universe" andcyberspaceincludewebsites,multi-user dungeons,MOOs, andweb chatrooms.[3] Digital architecture allows complex calculations that delimit architects and allow a diverse range of complex forms to be created with great ease using computeralgorithms.[4]The new genre of "scripted, iterative, and indexical architecture" produces a proliferation of formal outcomes, leaving the designer the role of selection and increasing the possibilities in architectural design.[4]This has "re-initiated a debate regarding curvilinearity, expressionism and role of technology in society" leading to new forms of non-standard architecture by architects such asZaha Hadid,Kas OosterhuisandUN Studio.[4]A conference held in London in 2009 named "Digital Architecture London" introduced the latest development in digital design practice. TheFar Eastern International Digital Design Award(The Feidad Award) has been in existence since 2000 and honours "innovative design created with the aid of digital media." In 2005 a jury with members including a representative fromQuantum Film,Greg LynnfromGreg Lynn FORM, Jacob van Rijs ofMVRDV,Gerhard Schmitt,Birger Sevaldson(Ocean North), chose among submissions "exploring digital concepts such as computing, information, electronic media, hyper-, virtual-, and cyberspace in order to help define and discuss future space and architecture in the digital age."[5] The concept of digital architectures has a long history in Internet scholarship. 
Prior tosocial media, scholars focused on how the structure of an online space – such as a forum, website, or blog – shaped the formation of publics and political discourses.[6][7]With the rapid rise of social media, scholars have turned their attention to how the architectural design of social media platforms affects the behavior of influential users, such aspolitical campaigns.[8]This line of research differs from theaffordancesapproach,[9]which focuses on the relationships between users and technology, rather than the digital architecture of the platform.
https://en.wikipedia.org/wiki/Digital_architecture
Blobitecture(fromblob architecture),blobismandblobismusare terms for a movement inarchitecturein which buildings have an organic,amoeba-shaped building form.[1]Though the termblob architecturewas already in vogue in the mid-1990s, the wordblobitecturefirst appeared in print in 2002, inWilliam Safire's "On Language" column in theNew York Times Magazine.[2]Though intended in the Safire article to have a derogatory meaning, the word stuck and is often used to describe buildings with curved and rounded shapes. The term "blob" was used by the Czech-British architectJan Kaplickýfor the first time for the "Blob Office Building" in London in 1986. The building was characterized by an organic, aerodynamic shape and was touted for being energy-saving. 'Blob architecture' was coined by architectGreg Lynnin 1995 in his experiments in digital design withmetaballgraphical software.[3]Soon a range of architects and furniture designers began to experiment with this "blobby" software to create new and unusual forms. The word "blobitecture" itself is aportmanteauof the words "blob" and "architecture". Despite its seeming organicism, blob architecture is not possible withoutcomputer-aided designprograms. Architects derive the forms by manipulating algorithms on computer modeling platforms. Other computer-aided design functions used are thenonuniform rational B-splineor NURBS,freeform surfaces, and digitizing of sculpted forms similar tocomputed tomography.[4] One precedent isArchigram, a group of English architects working in the 1960s, to whichPeter Cookbelonged. They were interested in inflatable architecture as well as in the shapes that could be generated from plastic.Ron Herron, also a member of Archigram, created blob-like architecture in his projects from the 1960s, such asWalking CitiesandInstant City, as did Michael Webb withSin Centre.[5] Buckminster Fuller'swork withgeodesic domesprovided both stylistic and structural precedents. Geodesic domes form the building blocks forThe Eden Project.[6] Niemeyer'sEdificio Copanbuilt in 1957 undulates asymmetrically, invoking the irregular non-linearity often seen in blobitecture.[7]There was an air of psychedelia in the 1970s that these experimental architecture projects were a part of. The Flintstone Houseby William Nicholson in 1976 was built over large inflated balloons.Frederick Kiesler's unbuiltEndless Houseis another instance of early blob-like architecture, although it is symmetrical in plan and designed before computers; his design for theShrine of the Book(construction begun 1965) which has the characteristic droplet form of fluid also anticipates forms that interest architects today. Similarly, the work of Vittorio Giorgini (Casa Saldarini), Pascal Haüsermann, and especially that ofAntti Lovagare examples of successfully built blobs. The latter built the famousPalais Bulles[8]close to Cannes on the French Côte d'Azur, owned by fashion designerPierre Cardin. 
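The metaball software mentioned above generates blob forms from a simple implicit field: each control point contributes a term that falls off with distance, and the architectural "skin" is the iso-surface where the summed field crosses a threshold. A two-dimensional sketch with arbitrary demo values (centers, radii, and threshold are invented for illustration):

```python
# Sketch: the metaball field function behind early "blob" form-finding.
# Points where the summed field exceeds a threshold are "inside" the blob;
# the iso-contour of the field is the blob's skin.
def field(x: float, y: float, balls) -> float:
    return sum(r * r / ((x - cx) ** 2 + (y - cy) ** 2 + 1e-9)
               for cx, cy, r in balls)

balls = [(0.0, 0.0, 1.0), (1.6, 0.2, 0.8)]   # two blobs close enough to merge
threshold = 1.0

for y in [0.8, 0.4, 0.0, -0.4, -0.8]:        # crude ASCII cross-section
    row = "".join("#" if field(x, y, balls) > threshold else "."
                  for x in [i * 0.2 - 1.0 for i in range(20)])
    print(row)
```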
On the basis of form rather than technology, the organic designs ofAntoni Gaudiin Barcelona and of theExpressionistslikeBruno TautandHermann Finsterlinare considered to be blob architecture.[9]The emergence of new aesthetic-oriented architectural theories likeOOOhas led contemporary architects to explicitly examine the formal-technological-theoretical implications of blobitecture, including digital-physicalaugmented realityworks of architects like iheartblob.[10] The term, especially in popular parlance, has come to be associated with odd-looking buildings includingFrank Gehry'sGuggenheim Museum Bilbao(1997) and theExperience Music Project(2000).[11]These, in the narrower sense, are not blob buildings, even though they were designed with advanced computer-aided design tools,CATIAin particular.[12]The reason for this is that they were designed from physical models rather than from computer manipulations. The first full blob building was built in the Netherlands byLars Spuybroek(NOX) andKas Oosterhuis. Called the Water Pavilion (1993–1997), it has a fully computer-based shape manufactured with computer-aided design tools and an electronic interactive interior where sound and light can be transformed by the visitor. A building that also can be considered an example of a blob isPeter CookandColin Fournier'sKunsthaus(2003) inGraz, Austria. Other instances areRoy Mason'sXanadu House(1979), and a rare excursion into the field byHerzog & de Meuronin theirAllianz Arena(2005). By 2005, Norman Foster had involved himself in blobitecture to some extent as well with his brain-shaped design for thePhilological Libraryat theFree University of BerlinandThe Glasshouse, Gateshead. French-born architectEphraim Henry Paviebuilt the free-shaped Biomorphic House (2012) in Israel.[13]
https://en.wikipedia.org/wiki/Blobitecture
Generative artispost-conceptual artthat has been created (in whole or in part) with the use of anautonomoussystem. Anautonomous systemin this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist. In some cases the human creator may claim that thegenerative systemrepresents their own artistic idea, and in others that the system takes on the role of the creator. "Generative art" often refers toalgorithmic art(algorithmicallydeterminedcomputer generated artwork) andsynthetic media(general term for any algorithmically generated media), but artists can also make generative art using systems ofchemistry,biology,mechanicsandrobotics,smart materials, manualrandomization,mathematics,data mapping,symmetry, andtiling. Generative algorithms - algorithms programmed to produce artistic works through predefined rules, stochastic methods, or procedural logic, often yielding dynamic, unique, and contextually adaptable outputs - are central to many of these practices. The use of the word "generative" in the discussion of art has developed over time. The use of "Artificial DNA" defines a generative approach to art focused on the construction of a system able to generate unpredictable events, all with a recognizable common character. The use ofautonomous systems, required by some contemporary definitions, focuses on a generative approach where the controls are strongly reduced. This approach is also named "emergent".Margaret Bodenand Ernest Edmonds have noted the use of the term "generative art" in the broad context of automatedcomputer graphicsin the 1960s, beginning with artwork exhibited byGeorg NeesandFrieder Nakein 1965.[1]A. Michael Noll did his initial computer art, combining randomness with order, in 1962,[2]and exhibited it along with works byBéla Juleszin 1965.[3] The terms "generative art" and "computer art" have been used in tandem, and more or less interchangeably, since the very earliest days.[1] The first such exhibition showed the work of Nees in February 1965, which some claim was titled "Generative Computergrafik".[1]While Nees does not himself remember, this was the title of his doctoral thesis published a few years later.[4]The correct title of the first exhibition and catalog was "computer-grafik".[5]"Generative art" and related terms were in common use by several other early computer artists around this time, includingManfred Mohr[1]andKen Knowlton.Vera Molnár(born 1924) is a French media artist of Hungarian origin. Molnár is widely considered to be a pioneer of generative art, and is also one of the first women to use computers in her art practice. The term "Generative Art", with the meaning of dynamic artwork-systems able to generate multiple artwork-events, was clearly used for the first time for the "Generative Art" conference in Milan in 1998. The term has also been used to describe geometricabstract artwhere simple elements are repeated, transformed, or varied to generate more complex forms. Thus defined, generative art was practiced by the Argentinian artistsEduardo Mac Entyreand Miguel Ángel Vidal in the late 1960s. In 1972 the Romanian-bornPaul Neagucreated the Generative Art Group in Britain. It was populated exclusively by Neagu using aliases such as "Hunsy Belmood" and "Edward Larsocchi".
In 1972 Neagu gave a lecture titled 'Generative Art Forms' at theQueen's University, BelfastFestival.[6][7] In 1970 theSchool of the Art Institute of Chicagocreated a department calledGenerative Systems. As described bySonia Landy Sheridan, the focus was on art practices using the then new technologies for the capture, inter-machine transfer, printing and transmission of images, as well as the exploration of the aspect of time in the transformation of image information. Also noteworthy isJohn Dunn,[8]first a student and then a collaborator of Sheridan.[9] In 1988 Clauser[10]identified the aspect of systemic autonomy as a critical element in generative art: It should be evident from the above description of the evolution of generative art that process (or structuring) and change (or transformation) are among its most definitive features, and that these features and the very term 'generative' imply dynamic development and motion. (the result) is not a creation by the artist but rather the product of the generative process - a self-precipitating structure. In 1989 Celestino Soddu defined the Generative Design approach to Architecture and Town Design in his bookCitta' Aleatorie.[11] In 1989 Franke referred to "generative mathematics" as "the study of mathematical operations suitable for generating artistic images."[12] From the mid-1990sBrian Enopopularized the termsgenerative musicand generative systems, making a connection with earlierexperimental musicbyTerry Riley,Steve ReichandPhilip Glass.[13] From the end of the 20th century, communities of generative artists, designers, musicians and theoreticians began to meet, forming cross-disciplinary perspectives. The first meeting about generative art was in 1998, at the inaugural International Generative Art conference at Politecnico di Milano University, Italy.[14]In Australia, the Iterate conference on generative systems in the electronic arts followed in 1999.[15]On-line discussion has centered around the eu-gene mailing list,[16]which began late 1999, and has hosted much of the debate which has defined the field.[17]: 1These activities have more recently been joined by theGenerator.xconference in Berlin starting in 2005. In 2012 the new journal GASATHJ, Generative Art Science and Technology Hard Journal, was founded by Celestino Soddu and Enrica Colabella,[18]joining several generative artists and scientists on the editorial board. Some have argued that as a result of this engagement across disciplinary boundaries, the community has converged on a shared meaning of the term. As Boden and Edmonds[1]put it in 2011: Today, the term "Generative Art" is still current within the relevant artistic community. Since 1998 a series of conferences have been held in Milan with that title (Generativeart.com), and Brian Eno has been influential in promoting and using generative art methods (Eno, 1996). Both in music and in visual art, the use of the term has now converged on work that has been produced by the activation of a set of rules and where the artist lets a computer system take over at least some of the decision-making (although, of course, the artist determines the rules). In the call of the Generative Art conferences in Milan (annually starting from 1998), the definition of Generative Art by Celestino Soddu: Generative Art is the idea realized as genetic code of artificial events, as construction of dynamic complex systems able to generate endless variations.
Each Generative Project is a concept-software that works producing unique and non-repeatable events, like music or 3D Objects, as possible and manifold expressions of the generating idea strongly recognizable as a vision belonging to an artist / designer / musician / architect /mathematician.[19] Discussion on the eu-gene mailing list was framed by the following definition byAdrian Wardfrom 1999: Generative art is a term given to work which stems from concentrating on the processes involved in producing an artwork, usually (although not strictly) automated by the use of a machine or computer, or by using mathematic or pragmatic instructions to define the rules by which such artworks are executed.[20] A similar definition is provided by Philip Galanter:[17] Generative art refers to any art practice where the artist creates a process, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art. Around the 2020s, generative AI models learned to imitate the distinct style of particular authors. For example, a generative image model such asStable Diffusionis able to model the stylistic characteristics of an artist likePablo Picasso(including his particular brush strokes, use of colour, perspective, and so on), and a user can engineer a prompt such as "an astronaut riding a horse, by Picasso" to cause the model to generate a novel image applying the artist's style to an arbitrary subject. Generative image models have received significant backlash from artists who object to their style being imitated without their permission, arguing that this harms their ability to profit from their own work.[21] Johann Kirnberger'sMusikalisches Würfelspiel("Musical Dice Game") of 1757 is considered an early example of a generative system based on randomness. Dice were used to select musical sequences from a numbered pool of previously composed phrases. This system provided a balance of order and disorder: the structure was based on an element of order on one hand, and disorder on the other.[22] ThefuguesofJ.S. Bachcould be considered generative, in that there is a strict underlying process that is followed by the composer.[23]Similarly,serialismfollows strict procedures which, in some cases, can be set up to generate entire compositions with limited human intervention.[24][25] Composers such asJohn Cage,[26]: 13–15Farmers Manual,[27]andBrian Eno[26]: 133have usedgenerative systemsin their works. The artistEllsworth Kellycreated paintings by using chance operations to assign colors in a grid. He also created works on paper that he then cut into strips or squares and reassembled using chance operations to determine placement.[28] Artists such asHans Haackehave explored processes of physical and social systems in artistic context.François Morellethas used both highly ordered and highly disordered systems in his artwork. Some of his paintings feature regular systems of radial or parallel lines to createMoiré Patterns.
In other works he has used chance operations to determine the coloration of grids.[29][30]Sol LeWittcreated generative art in the form of systems expressed innatural languageand systems of geometricpermutation.Harold Cohen'sAARONsystem is a longstanding project combining software artificial intelligence with robotic painting devices to create physical artifacts.[31]Steina and Woody Vasulkaare video art pioneers who used analog video feedback to create generative art. Video feedback is now cited as an example of deterministic chaos, and the early explorations by the Vasulkas anticipated contemporary science by many years. Software systems exploitingevolutionary computingto create visual form include those created byScott DravesandKarl Sims. The digital artistJoseph Nechvatalhas exploited models of viral contagion.[32]AutopoiesisbyKen Rinaldoincludes fifteen musical androboticsculptures that interact with the public and modify their behaviors based on both the presence of the participants and each other.[26]: 144–145 Jean-Pierre HebertandRoman Verostkoare founding members of theAlgorists, a group of artists who create their own algorithms to create art.A. Michael Noll, of Bell Telephone Laboratories, Incorporated, programmed computer art using mathematical equations and programmed randomness, starting in 1962.[33] The French artistJean-Max Albert, besides environmental sculptures likeIapetus[34]andO=C=O,[35]developed a project dedicated to the vegetation itself, in terms of biological activity. TheCalmoduline Monumentproject is based on the property of a protein,calmodulin, to bond selectively to calcium. Exterior physical constraints (wind, rain, etc.) modify the electric potential of the cellular membranes of a plant and consequently the flux of calcium. The calcium, in turn, controls the expression of the calmodulin gene.[36]The plant can thus, when there is a stimulus, modify its "typical" growth pattern. The basic principle of this monumental sculpture is that, to the extent that these signals could be picked up and transported, they could be enlarged, translated into colors and shapes, and made to show the plant's "decisions", suggesting a level of fundamental biological activity.[37] Maurizio Bologniniworks with generative machines to address conceptual and social concerns.[38]Mark Napieris a pioneer in data mapping, creating works based on the streams of zeros and ones in Ethernet traffic, as part of the "Carnivore" project.Martin Wattenbergpushed this theme further, transforming "data sets" as diverse as musical scores (in "Shape of Song", 2001) and Wikipedia edits (History Flow, 2003, withFernanda Viegas) into dramatic visual compositions. The Canadian artistSan Basedeveloped a "Dynamic Painting" algorithm in 2002. Using computer algorithms as "brush strokes", Base creates sophisticated imagery that evolves over time to produce a fluid, never-repeating artwork.[39] Since 1996 there have beenambigram generatorsthat auto generateambigrams.[40][41][42] The Italian composerPietro Grossi, a pioneer ofcomputer music, from 1986 extended his experiments to images (applying the same procedures used in his musical work), and specifically to computer graphics, writing programs with specific auto-decisions and developing the concept ofHomeArt, presented for the first time in the exhibitionNew Atlantis: the continent of electronic music, organized by theVenice Biennalein 1986. 
Some contemporary artists who create generative visual artworks areJohn Maeda,Daniel Shiffman,Zachary Lieberman,Golan Levin,Casey Reas,Ben Fry, andGiles Whitaker (artist). For some artists, graphical user interfaces and computer code have become an independent art form in themselves.Adrian Wardcreated Auto-Illustrator as a commentary on software and generative methods applied to art and design.[citation needed] In 1987Celestino Sodducreated the artificial DNA of Italian medieval towns, able to generate endless3Dmodels of cities identifiable as belonging to the idea.[43] In 2010,Michael Hansmeyergenerated architectural columns in a project called "Subdivided Columns – A New Order (2010)". The piece explored how the simple process of repeated subdivision can create elaborate architectural patterns. Rather than designing any columns directly, Hansmeyer designed a process that produced columns automatically. The process could be run again and again with different parameters to create endless permutations. Endless permutations could be considered a hallmark of generative design.[44] Writers such asTristan Tzara,Brion Gysin, andWilliam Burroughsused thecut-up techniqueto introduce randomization to literature as a generative system.Jackson Mac Lowproduced computer-assisted poetry and used algorithms to generate texts;Philip M. Parkerhas written software to automatically generate entire books.Jason Nelsonused generative methods with speech-to-text software to create a series of digital poems from movies, television and other audio sources.[45] In the late 2010s, authors began to experiment withneural networkstrained on large language datasets.David Jhave Johnston'sReRitesis an early example of human-edited AI-generated poetry. Generative systems may be modified while they operate, for example by using interactive programming environments such asCsound,SuperCollider,FluxusandTidalCycles, including patching environments such asMax/MSP,Pure Dataandvvvv. This is a standard approach to programming by artists, but may also be used to create live music and/or video by manipulating generative systems on stage, a performance practice that has become known aslive coding. As with many examples ofsoftware art, because live coding emphasizes human authorship rather than autonomy, it may be considered in opposition to generative art.[46] In 2020, Erick "Snowfro" Calderon launched the Art Blocks platform[47]for combining the ideas of generative art and theblockchain, with resulting artworks created asNFTson theEthereumblockchain. One of the key innovations with the generative art created in this way is that all the source code and the algorithm for creating the art have to be finalized and put on the blockchain permanently, without any ability to alter them further. The artwork is generated only when it is sold ("minted"); the result is random, yet should reflect the overall aesthetic defined by the artist. Calderon argues that this process forces the artist to be very thoughtful about the algorithm behind the art: Until today, a [generative] artist would create an algorithm, press the spacebar 100 times, pick five of the best ones and print them in high quality. Then they would frame them, and put them in a gallery. Maybe. Because Art Blocks forces the artist to accept every single output of the algorithm as their signed piece, the artist has to go back and tweak the algorithm until it's perfect. They can't just cherry pick the good outputs. 
That elevates the level of algorithmic execution because the artist is creating something that they know they're proud of before they even know what's going to come out on the other side.[48] In 2003, Philip Galanter published the most widely cited theory of generative art, which describes generative art systems in the context of complexity theory.[17]In particular the notion ofMurray Gell-MannandSeth Lloyd'seffective complexityis cited. In this view both highly ordered and highly disordered generative art can be viewed as simple. Highly ordered generative art minimizesentropyand allows maximaldata compression, and highly disordered generative art maximizes entropy and disallows significant data compression. Maximally complex generative art blends order and disorder in a manner similar to biological life, and indeed biologically inspired methods are most frequently used to create complex generative art. This view is at odds with the earlierinformation theoryinfluenced views ofMax Bense[49]andAbraham Moles,[50]where complexity in art increases with disorder. Galanter notes further that, given the use of visual symmetry, pattern, and repetition by the most ancient known cultures, generative art is as old as art itself. He also addresses the mistaken equivalence by some that rule-based art is synonymous with generative art. For example, some art is based on constraint rules that disallow the use of certain colors or shapes. Such art is not generative because constraint rules are not constructive, i.e. by themselves they do not assert what is to be done, only what cannot be done.[51] In their 2009 article,Margaret Bodenand Ernest Edmonds agree that generative art need not be restricted to that done using computers, and that some rule-based art is not generative. They develop a technical vocabulary that includes Ele-art (electronic art), C-art (computer art), D-art (digital art), CA-art (computer assisted art), G-art (generative art), CG-art (computer based generative art), Evo-art (evolutionary based art), R-art (robotic art), I-art (interactive art), CI-art (computer based interactive art), and VR-art (virtual reality art).[1] The discourse around generative art can be characterized by the theoretical questions which motivate its development. McCormack et al. propose a number of such questions as the most important for the field.[52] Another question is that of postmodernism—are generative art systems the ultimate expression of the postmodern condition, or do they point to a new synthesis based on a complexity-inspired world-view?[53]
https://en.wikipedia.org/wiki/Generative_art
Evolutionary artis a branch ofgenerative art, in which the artist does not do the work of constructing the artwork, but rather lets a system do the construction. In evolutionary art, initially generated art is put through an iterated process of selection and modification to arrive at a final product, where it is the artist who is the selective agent. Evolutionary art is to be distinguished fromBioArt, which uses living organisms as the material medium instead of paint, stone, metal, etc. In common with biologicalevolutionthroughnatural selectionoranimal husbandry, the members of a population undergoing artificial evolution modify their form or behavior over many reproductive generations in response to a selective regime. Ininteractive evolutionthe selective regime may be applied by the viewer explicitly by selecting individuals which are aesthetically pleasing. Alternatively aselection pressurecan be generated implicitly, for example according to the length of time a viewer spends near a piece of evolving art. Equally, evolution may be employed as a mechanism for generating a dynamic world of adaptive individuals, in which theselection pressureis imposed by the program, and the viewer plays no role in selection, as in theBlack Shoalsproject.
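The selection-modification loop described above can be made concrete with a short sketch. This is a minimal illustration, not any particular artist's system: artworks are reduced to parameter vectors, and the human selective agent is stood in for by a placeholder scoring function (in a real interactive system the score would come from the viewer's choices or dwell time). All names and parameters here are illustrative.

```python
import random

GENOME_LEN = 8          # each artwork is encoded as 8 parameters in [0, 1]
POP_SIZE = 12
GENERATIONS = 20
MUTATION_RATE = 0.15

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def mutate(genome):
    # small random perturbation of some parameters, clamped to [0, 1]
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
            if random.random() < MUTATION_RATE else g
            for g in genome]

def viewer_score(genome):
    # Stand-in for the human selective agent: in interactive evolution this
    # rating would be supplied by the viewer, not computed.
    return -sum((g - 0.5) ** 2 for g in genome)

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # selection: keep the half of the population the "viewer" prefers
    population.sort(key=viewer_score, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # reproduction with modification: mutated copies refill the population
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best genome:", max(population, key=viewer_score))
```

Replacing viewer_score with ratings gathered from an audience turns this into interactive evolution; replacing it with a program-defined measure gives the implicit, viewer-free variant described above.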
https://en.wikipedia.org/wiki/Evolutionary_art
Elmo(stylized aselmo, ablendofelasticandmonkey) is acomputer shogievaluation function andbookfile (joseki) created by Makoto Takizawa (瀧澤誠). It is designed to be used with a third-party shogi alpha–beta search engine. Combined with theyaneura ou(やねうら王) search, Elmo became the champion of the 27th annual World Computer Shogi Championship (世界コンピュータ将棋選手権) in May 2017.[1][2]However, in the Den Ō tournament (将棋電王戦) in November 2017, Elmo failed to place among the top five engines, losing to平成将棋合戦ぽんぽこ(1st), shotgun (2nd), ponanza (3rd),読み太(4th), and Qhapaq_conflated (5th).[3]It won the World Computer Shogi Championship again in 2021. In October 2017,DeepMindclaimed that its programAlphaZero, after two hours of massively parallel training (700,000 steps or 10,300,000 games), began to exceed Elmo's performance. With a full nine hours of training (24 million games), AlphaZero defeated Elmo in a 100-game match, winning 90, losing 8, and drawing 2.[4][5] Elmo is free software that may be run on shogi engine interface GUIs such asShogidokoroand ShogiGUI.[6][7][8] A newcastlehas appeared in computer games featuring elmo, and has been namedelmo castle(エルモ囲い, erumo-gakoi). The castle has subsequently been adopted byprofessional shogi playersand was recently featured in a book on a new Anti–Ranging Rook Rapid Attack strategy.[9] 
https://en.wikipedia.org/wiki/Elmo_(shogi_engine)
Stockfishis afree and open-sourcechess engine, available for various desktop and mobile platforms. It can be used inchess softwarethrough theUniversal Chess Interface. Stockfish has been one of the strongest chess engines in the world for several years;[3][4][5]it has won all main events of theTop Chess Engine Championship(TCEC) and theChess.com Computer Chess Championship(CCC) since 2020 and, as of May 2025[update], is the strongest CPU chess engine in the world with an estimatedElorating of 3644, in a time control of 40/15 (15 minutes to make 40 moves), according to CCRL.[6] The Stockfish engine was developed by Tord Romstad, Marco Costalba, and Joona Kiiski, and was derived from Glaurung, an open-source engine by Tord Romstad released in 2004. It is now being developed and maintained by the Stockfish community.[7] Stockfish historically used only a classicalhand-crafted functionto evaluate board positions, but with the introduction of theefficiently updatable neural network(NNUE) in August 2020, it adopted a hybrid evaluation system that primarily used the neural network and occasionally relied on the hand-crafted evaluation.[8][9][10]In July 2023, Stockfish removed the hand-crafted evaluation and transitioned to a fully neural network-based approach.[11][12] Stockfish uses a tree-search algorithm based onalpha–beta searchwith several hand-designed heuristics, and since Stockfish 12 (2020) uses an efficiently updatable neural network as its evaluation function. It represents positions usingbitboards.[13] Stockfish supportsChess960, a feature it inherited from Glaurung.[14]Support forSyzygytablebases, previously available in aforkmaintained by Ronald de Man, was integrated into Stockfish in 2014.[15]In 2018 support for the 7-man Syzygy was added, shortly after the tablebase was made available. Stockfish supports an unlimited number ofCPU threadsinmultiprocessorsystems, with a maximumtransposition tablesize of 32 TB.[16] Stockfish has been a very popular engine on various platforms. On desktop, it is the default chess engine bundled with theInternet Chess Clubinterface programs BlitzIn and Dasher. On mobile, it has been bundled with the Stockfish app, SmallFish and Droidfish. Other Stockfish-compatiblegraphical user interfaces(GUIs) includeFritz, Arena, Stockfish for Mac, andPyChess.[17][18]Stockfish can be compiled toWebAssemblyorJavaScript, allowing it to run in the browser. BothChess.comandLichessprovide Stockfish in this form in addition to a server-side program.[19]Release versions and development versions are available asC++source codeand as precompiled versions forMicrosoft Windows,macOS,Linux32-bit/64-bit andAndroid. The program originated fromGlaurung, an open-source chess engine created by Tord Romstad and first released in 2004. Four years later, Marco Costalba forked the project, naming itStockfishbecause it was "produced in Norway and cooked in Italy" (Romstad is Norwegian and Costalba is Italian). The first version, Stockfish 1.0, was released in November 2008.[20][21]For a while, new ideas and code changes were transferred between the two programs in both directions, until Romstad decided to discontinue Glaurung in favor of Stockfish, which was the stronger engine at the time.[22]The last Glaurung version (2.2) was released in December 2008. 
Around 2011, Romstad decided to abandon his involvement with Stockfish in order to spend more time on his new iOS chess app.[23]On 18 June 2014 Marco Costalba announced that he had "decided to step down as Stockfish maintainer" and asked that the community create a fork of the current version and continue its development.[24]An official repository, managed by a volunteer group of core Stockfish developers, was created soon after and currently manages the development of the project.[25] Since 2013, Stockfish has been developed using adistributedtesting framework namedFishtest, where volunteers can donate CPU time for testing improvements to the program.[26][27][28] Changes to game-playing code are accepted or rejected based on the results of tens of thousands of games played on the framework against an older "reference" version of the program, usingsequential probability ratio testing. Tests on the framework are verified using thechi-squared test, and only if the results are statistically significant are they deemed reliable and used to revise the software code. After the inception of Fishtest, Stockfish experienced an explosive growth of 120Elo pointsin just 12 months, propelling it to the top of all major rating lists.[29] As of May 2025[update], the framework has used a total of more than 18,000 years of CPU time to play over 9.1 billion chess games.[30] In June 2020, Stockfish introduced theefficiently updatable neural network(NNUE) approach, based on earlier work bycomputer shogiprogrammers.[31][32]Instead of using manually designed heuristics to evaluate the board, this approach introduced a neural network trained on millions of positions which could be evaluated quickly on CPU. On 2 September 2020, the twelfth version of Stockfish was released, incorporating NNUE and reportedly winning ten times more game pairs than it lost when matched against version eleven.[33][34]In July 2023, the classical evaluation was completely removed in favor of the NNUE evaluation.[35] Stockfish is aTCECmultiple-time champion and the current leader in trophy count. Ever since TCEC restarted in 2013, Stockfish has finished first or second in every season except one. Stockfish finished second in TCEC Seasons 4 and 5, with scores of 23–25, first againstHoudini3 and later againstKomodo1142 in the Superfinal event. Season 5 was notable for the winning Komodo team as they accepted the award posthumously for the program's creatorDon Dailey, who succumbed to an illness during the final stage of the event. In his honor, the version of Stockfish that was released shortly after that season was named "Stockfish DD".[36] On 30 May 2014, Stockfish 170514 (a development version of Stockfish 5 with tablebase support) convincingly won TCEC Season 6, scoring 35.5–28.5 against Komodo 7x in the Superfinal.[37]Stockfish 5 was released the following day.[38]In TCEC Season 7, Stockfish again made the Superfinal, but lost to Komodo with a score of 30.5–33.5.[37]In TCEC Season 8, despite losses on time caused by buggy code, Stockfish nevertheless qualified once more for the Superfinal, but lost 46.5–53.5 to Komodo.[37]In Season 9, Stockfish defeated Houdini 5 with a score of 54.5–45.5.[37][39] Stockfish finished third during Season 10 of TCEC, the only season since 2013 in which Stockfish failed to qualify for the Superfinal. It did not lose a game but was still eliminated because it was unable to score enough wins against lower-rated engines. 
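The sequential probability ratio testing that Fishtest applies to candidate patches can be illustrated with a short sketch. This is a simplified textbook version, not Fishtest's actual implementation: draws are ignored for brevity, and the Elo hypotheses and error bounds are illustrative choices, not the framework's real configuration.

```python
import math

def elo_to_score(elo):
    # Expected score of the candidate against the reference for a given Elo edge.
    return 1.0 / (1.0 + 10.0 ** (-elo / 400.0))

def sprt(results, elo0=0.0, elo1=5.0, alpha=0.05, beta=0.05):
    """Simplified SPRT: test H1 'the patch is worth elo1' against H0
    'the patch is worth elo0' on a stream of results (1 = win, 0 = loss).
    Returns 'accept', 'reject', or 'continue'."""
    p0, p1 = elo_to_score(elo0), elo_to_score(elo1)
    lower = math.log(beta / (1.0 - alpha))    # cross below: reject the patch
    upper = math.log((1.0 - beta) / alpha)    # cross above: accept the patch
    llr = 0.0                                 # cumulative log-likelihood ratio
    for r in results:
        llr += math.log(p1 / p0) if r == 1 else math.log((1.0 - p1) / (1.0 - p0))
        if llr >= upper:
            return "accept"
        if llr <= lower:
            return "reject"
    return "continue"                         # keep scheduling more games
```

The appeal of the sequential form is that most patches resolve after far fewer games than a fixed-sample test would require, which is what makes testing a constant stream of candidate changes feasible.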
After its Season 10 elimination, Stockfish went on a long winning streak, winning Seasons 11 (59–41 against Houdini 6.03),[37][40]12 (60–40 against Komodo 12.1.1),[37][41]and 13 (55–45 against Komodo 2155.00)[37][42]convincingly.[43]InSeason 14, Stockfish faced a new challenger inLeela Chess Zero, eking out a win by one point (50.5–49.5).[37][44]Its winning streak was finally ended inSeason 15, when Leela qualified again and won 53.5–46.5,[37]but Stockfish promptly wonSeason 16, defeating AllieStein 54.5–45.5, after Leela failed to qualify for the Superfinal.[37]InSeason 17, Stockfish faced Leela again in the Superfinal, losing 52.5–47.5. However, Stockfish has won every Superfinal since: beating Leela 53.5–46.5 inSeason 18, 54.5–45.5 inSeason 19, 53–47 inSeason 20, and 56–44 inSeason 21.[37]In Season 22, Komodo Dragon beat out Leela to qualify for the Superfinal, losing to Stockfish by a large margin, 59.5–40.5. Stockfish did not lose an opening pair in this match.[45]Leela made the Superfinal in Seasons 23 and 24, but was beaten decisively by Stockfish both times (58.5–41.5 and 58–42).[46][47]In Season 25, Stockfish once again defeated Leela, but this time by a narrower margin of 52–48.[48] Stockfish also took part in the TCEC Cup, winning the first edition, but was surprisingly upset by Houdini in the semifinals of the second edition.[37][49]Stockfish recovered to beat Komodo in the third-place playoff.[37]In the third edition, Stockfish made it to the finals, but was defeated byLeela Chess Zeroafter blundering in a 7-manendgame tablebasedraw. It turned this result around in the fourth edition, defeating Leela in the final 4.5–3.5.[37]In TCEC Cup 6, Stockfish finished third after losing to AllieStein in the semifinals, the first time it had failed to make the finals. Since then, Stockfish has consistently won the tournament, with the exception of the 11th edition, which Leela won 8.5–7.5. Ever sinceChess.comhosted its firstChess.com Computer Chess Championshipin 2018, Stockfish has been the most successful engine. It dominated the earlier championships, winning six consecutive titles before finishing second in CCC7. Since then, its dominance has come under threat from the neural-network engines Leelenstein andLeela Chess Zero, but it has continued to perform well, reaching at least the superfinal in every edition up to CCC11. CCC12 had for the first time a knockout format, with seeding placing CCC11 finalists Stockfish and Leela in the same half. Leela eliminated Stockfish in the semi-finals. However, a post-tournament match against the loser of the final, Leelenstein, saw Stockfish winning in the same format as the main event. After finishing second again to Leela in CCC13, and an uncharacteristic fourth in CCC14, Stockfish went on a long winning streak, taking first place in every championship since. Stockfish's strength relative to the best human chess players was most apparent in a handicap match with grandmasterHikaru Nakamura(2798-rated) in August 2014. In the first two games of the match, Nakamura had the assistance of an older version ofRybka, and in the next two games, he received White with pawnoddsbut no assistance. Nakamura was the world's fifth highest rated human chess player at the time of the match, while Stockfish 5 was denied use of its opening book and endgame tablebase. Stockfish won each half of the match 1.5–0.5. 
Both of Stockfish's wins arose from positions in which Nakamura, as is typical for his playing style, pressed for a win instead of acquiescing to a draw.[187] In December 2017, Stockfish 8 was used as a benchmark to testGoogledivisionDeepMind'sAlphaZero, with Stockfish running on CPU and AlphaZero running on Google's proprietaryTensor Processing Units. AlphaZero was trained through self-play for a total of nine hours, and reached Stockfish's level after just four.[188][189][190]In 100 games from the starting position, AlphaZero won 25 games as White and 3 as Black, drawing the remaining 72 and losing none.[191]AlphaZero also played twelve 100-game matches against Stockfish starting from twelve popular openings, for a final score of 290 wins, 886 draws and 24 losses, or a point score of 733:467.[192][note 2] AlphaZero's victory over Stockfish sparked a flurry of activity in the computer chess community, leading to a new open-source engine aimed at replicating AlphaZero, known asLeela Chess Zero. By January 2019, Leela was able to defeat the version of Stockfish that played AlphaZero (Stockfish 8) in a 100-game match. An updated version of Stockfish narrowly defeated Leela Chess Zero in the superfinal of the14th TCEC season, 50.5–49.5 (+10 =81 −9),[37]but lost the Superfinal of thenext seasonto Leela 53.5–46.5 (+14 =79 −7).[37][194]The two engines remained close in strength for a while, but Stockfish has pulled away since the introduction of NNUE, winning every TCEC season since Season 18.
https://en.wikipedia.org/wiki/Stockfish_chess_engine
Chess softwarecomes in different forms. A chess playing program provides a graphical chessboard on which one can play a chess game against a computer. Such programs are available forpersonal computers,video game consoles,smartphones/tablet computersormainframes/supercomputers. Achess enginegenerates moves, but is accessed via acommand-line interfacewith no graphics. A dedicated chess computer has been purpose-built solely to play chess. Agraphical user interface(GUI) allows one to import and load an engine, and play against it. A chess database allows one to import, edit, and analyze a large archive of past games. This list contains only chess software for which Wikipedia articles exist, and it is therefore very incomplete. It does not reflect or imply current or historicplay strength, as this characteristic in itself usually does not warrant an entry on Wikipedia. The following are special-purpose hardware/software combinations that are inextricably connected:
https://en.wikipedia.org/wiki/List_of_chess_software
This is a list ofgenetic algorithm(GA) applications.
https://en.wikipedia.org/wiki/List_of_genetic_algorithm_applications
Particle filters, also known assequential Monte Carlomethods, are a set ofMonte Carloalgorithms used to find approximate solutions tofiltering problemsfor nonlinear state-space systems, such as those arising insignal processingandBayesian statistical inference.[1]Thefiltering problemconsists of estimating the internal states indynamical systemswhen partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute theposterior distributionsof the states of aMarkov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral, in reference tomean-field interacting particle methodsused influid mechanicssince the beginning of the 1960s.[2]The term "Sequential Monte Carlo" was coined byJun S. Liuand Rong Chen in 1998.[3] Particle filtering uses a set of particles (also called samples) to represent theposterior distributionof astochastic processgiven the noisy and/or partial observations. The state-space model can be nonlinear and the initial state and noise distributions can take any form required. Particle filter techniques provide a well-established methodology[2][4][5]for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions. However, these methods do not perform well when applied to very high-dimensional systems. Particle filters update their prediction in an approximate (statistical) manner. The samples from the distribution are represented by a set of particles; each particle has a likelihood weight assigned to it that represents theprobabilityof that particle being sampled from theprobability density function. Weight disparity leading to weight collapse is a common issue encountered in these filtering algorithms. However, it can be mitigated by including a resampling step before the weights become too uneven. Several adaptive resampling criteria can be used, including thevarianceof the weights and the relativeentropywith respect to the uniform distribution.[6]In the resampling step, the particles with negligible weights are replaced by new particles in the proximity of the particles with higher weights. From the statistical and probabilistic point of view, particle filters may be interpreted asmean-field particleinterpretations ofFeynman-Kacprobability measures.[7][8][9][10][11]These particle integration techniques were developed inmolecular chemistryandcomputational physicsbyTheodore E. HarrisandHerman Kahnin 1951,Marshall N. RosenbluthandArianna W. Rosenbluthin 1955,[12]and more recently by Jack H. Hetherington in 1984.[13]In computational physics, these Feynman-Kac type path particle integration methods are also used inQuantum Monte Carlo, and more specificallyDiffusion Monte Carlo methods.[14][15][16]Feynman-Kac interacting particle methods are also strongly related tomutation-selection genetic algorithmscurrently used inevolutionary computationto solve complex optimization problems. The particle filter methodology is used to solveHidden Markov Model(HMM) andnonlinear filteringproblems. With the notable exception of linear-Gaussian signal-observation models (Kalman filter) or wider classes of models (Benes filter[17]), Mireille Chaleyat-Maurel and Dominique Michel proved in 1984 that the sequence of posterior distributions of the random states of a signal, given the observations (a.k.a. 
optimal filter), has no finite recursion.[18]Various other numerical methods based on fixed grid approximations,Markov Chain Monte Carlotechniques, conventional linearization,extended Kalman filters, or determining the best linear system (in the expected cost-error sense) are unable to cope with large-scale systems, unstable processes, or insufficiently smooth nonlinearities. Particle filters and Feynman-Kac particle methodologies find application insignal and image processing,Bayesian inference,machine learning,risk analysis and rare event sampling,engineeringand robotics,artificial intelligence,bioinformatics,[19]phylogenetics,computational science,economicsandmathematical finance,molecular chemistry,computational physics,pharmacokinetics, quantitative risk and insurance[20][21]and other fields. From a statistical and probabilistic viewpoint, particle filters belong to the class ofbranching/genetic type algorithms, andmean-field type interacting particle methodologies. The interpretation of these particle methods depends on the scientific discipline. InEvolutionary Computing,mean-field genetic type particlemethodologies are often used as heuristic and natural search algorithms (a.k.a.Metaheuristic). Incomputational physicsandmolecular chemistry, they are used to solve Feynman-Kacpath integrationproblems or to compute Boltzmann-Gibbs measures, top eigenvalues, andground statesofSchrödingeroperators. InBiologyandGenetics, they represent the evolution of a population of individuals or genes in some environment. The origins of mean-field typeevolutionary computational techniquescan be traced back to 1950 and 1954 withAlan Turing'swork on genetic type mutation-selection learning machines[22]and the articles byNils Aall Barricelliat theInstitute for Advanced StudyinPrinceton, New Jersey.[23][24]The first trace of particle filters instatistical methodologydates back to the mid-1950s: the 'Poor Man's Monte Carlo'[25]proposed byJohn Hammersleyet al. in 1954 contained hints of the genetic type particle filtering methods used today. In 1963,Nils Aall Barricellisimulated a genetic type algorithm to mimic the ability of individuals to play a simple game.[26]Inevolutionary computingliterature, genetic-type mutation-selection algorithms became popular through the seminal work ofJohn Hollandin the early 1970s, particularly his book[27]published in 1975. In Biology andGenetics, the Australian geneticistAlex Fraseralso published in 1957 a series of papers on the genetic type simulation ofartificial selectionof organisms.[28]The computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[29]and Crosby (1973).[30]Fraser's simulations included all of the essential elements of modern mutation-selection genetic particle algorithms. 
From the mathematical viewpoint, theconditional distributionof the random states of a signal given some partial and noisy observations is described by a Feynman-Kac probability on the random trajectories of the signal weighted by a sequence of likelihood potential functions.[7][8]Quantum Monte Carlo, and more specificallyDiffusion Monte Carlo methods, can also be interpreted as a mean-field genetic type particle approximation of Feynman-Kac path integrals.[7][8][9][13][14][31][32]The origins ofQuantum Monte Carlomethods are often attributed toEnrico FermiandRobert Richtmyer, who developed in 1948 a mean-field particle interpretation ofneutron chain reactions,[33]but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984.[13]One can also quote the earlier seminal works ofTheodore E. HarrisandHerman Kahnin particle physics, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies.[34]In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work ofMarshall N. RosenbluthandArianna W. Rosenbluth.[12] The use ofgenetic particle algorithmsin advancedsignal processingandBayesian inferenceis more recent. In January 1993, Genshiro Kitagawa developed a "Monte Carlo filter",[35]a slightly modified version of which appeared in 1996.[36]In April 1993, Neil J. Gordon et al. published in their seminal work[37]an application of a genetic type algorithm to Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state space or the noise of the system. Independent work on particle filters by Pierre Del Moral[2]and by Himilcon Carvalho, Pierre Del Moral,André Monin, and Gérard Salut[38]was published in the mid-1990s. Particle filters were also developed in signal processing from 1989 to 1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at theLAAS-CNRS(the Laboratory for Analysis and Architecture of Systems), in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales) and the IT company DIGILOG, on RADAR/SONAR and GPS signal processing problems.[39][40][41][42][43][44] From 1950 to 1996, all the publications on particle filters and genetic algorithms, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations, without a single proof of their consistency or any discussion of the bias of the estimates and of the genealogical and ancestral tree-based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms are due to Pierre Del Moral[2][4]in 1996. The article[2]also contains a proof of the unbiasedness properties of a particle approximation of likelihood functions and unnormalizedconditional probabilitymeasures. The unbiased particle estimator of the likelihood functions presented in this article is used today in Bayesian statistical inference. 
Dan Crisan, Jessica Gaines, andTerry Lyons,[45][46][47]as well as Pierre Del Moral and Terry Lyons,[48]created branching-type particle techniques with various population sizes around the end of the 1990s. P. Del Moral, A. Guionnet, and L. Miclo[8][49][50]made further advances in this subject in 2000. The first central limit theorems were proved by Pierre Del Moral and Alice Guionnet[51]in 1999 and by Pierre Del Moral and Laurent Miclo[8]in 2000. The first uniform convergence results concerning the time parameter for particle filters were developed at the end of the 1990s by Pierre Del Moral andAlice Guionnet.[49][50]The first rigorous analysis of genealogical tree-based particle filter smoothers is due to P. Del Moral and L. Miclo in 2001.[52] The theory on Feynman-Kac particle methodologies and related particle filter algorithms was developed in 2000 and 2004 in the books.[8][5]These abstract probabilistic models encapsulate genetic type algorithms, particle, and bootstrap filters, interacting Kalman filters (a.k.a. Rao–Blackwellized particle filter[53]), importance sampling and resampling style particle filter techniques, including genealogical tree-based and particle backward methodologies for solving filtering and smoothing problems. Other classes of particle filtering methodologies include genealogical tree-based models,[10][5][54]backward Markov particle models,[10][55]adaptive mean-field particle models,[6]island-type particle models,[56][57]particle Markov chain Monte Carlo methodologies,[58][59]Sequential Monte Carlo samplers,[60][61][62]Sequential Monte Carlo Approximate Bayesian Computation methods,[63]and Sequential Monte Carlo ABC based Bayesian Bootstrap.[64] A particle filter's goal is to estimate the posterior density of state variables given observation variables. The particle filter is intended for use with ahidden Markov Model, in which the system includes both hidden and observable variables. The observable variables (observation process) are linked to the hidden variables (state-process) via a known functional form. Similarly, the probabilistic description of the dynamical system defining the evolution of the state variables is known. A generic particle filter estimates the posterior distribution of the hidden states using the observation measurement process. With respect to a state-space model such as the one described below, the filtering problem is to estimatesequentiallythe values of the hidden statesXk{\displaystyle X_{k}}, given the values of the observation processY0,⋯,Yk,{\displaystyle Y_{0},\cdots ,Y_{k},}at any time stepk. All Bayesian estimates ofXk{\displaystyle X_{k}}follow from theposterior densityp(xk|y0,y1,...,yk){\displaystyle p(x_{k}|y_{0},y_{1},...,y_{k})}. The particle filter methodology provides an approximation of these conditional probabilities using the empirical measure associated with a genetic type particle algorithm. In contrast, the Markov Chain Monte Carlo orimportance samplingapproach would model the full posteriorp(x0,x1,...,xk|y0,y1,...,yk){\displaystyle p(x_{0},x_{1},...,x_{k}|y_{0},y_{1},...,y_{k})}. Particle methods often assume that the statesXk{\displaystyle X_{k}}form a Markov chain with a known transition densityp(xk|xk−1){\displaystyle p(x_{k}|x_{k-1})}and that the observationsYk{\displaystyle Y_{k}}, conditionally on the states, are independent with a known likelihood densityp(yk|xk){\displaystyle p(y_{k}|x_{k})}. An example of a system with these properties isXk=g(Xk−1)+Wk{\displaystyle X_{k}=g(X_{k-1})+W_{k}}withYk=h(Xk)+Vk,{\displaystyle Y_{k}=h(X_{k})+V_{k},}where bothWk{\displaystyle W_{k}}andVk{\displaystyle V_{k}}are mutually independent sequences with knownprobability density functionsandgandhare known functions. These two equations can be viewed asstate spaceequations and look similar to the state space equations for the Kalman filter. 
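As a concrete instance of the model X_k = g(X_{k-1}) + W_k, Y_k = h(X_k) + V_k, the sketch below simulates a univariate nonlinear system. The particular g and h used here are a standard benchmark choice from the particle-filtering literature, not functions taken from this article, and the noise scales are illustrative.

```python
import math
import random

def g(x):
    # Nonlinear state transition (a common benchmark form).
    return 0.5 * x + 25.0 * x / (1.0 + x * x)

def h(x):
    # Nonlinear observation function.
    return x * x / 20.0

def simulate(n, q=math.sqrt(10.0), r=1.0):
    """Draw a hidden trajectory X_0..X_n and observations Y_0..Y_n from
    X_k = g(X_{k-1}) + W_k and Y_k = h(X_k) + V_k with Gaussian noise."""
    x = random.gauss(0.0, 1.0)                  # X_0 drawn from the prior p(x_0)
    xs, ys = [], []
    for _ in range(n + 1):
        xs.append(x)
        ys.append(h(x) + random.gauss(0.0, r))  # V_k ~ N(0, r^2)
        x = g(x) + random.gauss(0.0, q)         # W_k ~ N(0, q^2)
    return xs, ys
```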
If the functionsgandhin the above example are linear, and if bothWk{\displaystyle W_{k}}andVk{\displaystyle V_{k}}areGaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter-based methods are a first-order approximation (EKF) or a second-order approximation (UKFin general, but if the probability distribution is Gaussian a third-order approximation is possible). The assumption that the initial distribution and the transitions of the Markov chain are continuous for theLebesgue measurecan be relaxed. To design a particle filter we simply need to assume that we can sample the transitionsXk−1→Xk{\displaystyle X_{k-1}\to X_{k}}of the Markov chainXk,{\displaystyle X_{k},}and to compute the likelihood functionxk↦p(yk|xk){\displaystyle x_{k}\mapsto p(y_{k}|x_{k})}(see for instance the genetic selection mutation description of the particle filter given below). The continuous assumption on the Markov transitions ofXk{\displaystyle X_{k}}is only used to derive in an informal (and rather abusive) way different formulae between posterior distributions using the Bayes' rule for conditional densities. In certain problems, the conditional distribution of observations, given the random states of the signal, may fail to have a density; the latter may be impossible or too complex to compute.[19]In this situation, an additional level of approximation is necessitated. One strategy is to replace the signalXk{\displaystyle X_{k}}by the Markov chainXk=(Xk,Yk){\displaystyle {\mathcal {X}}_{k}=\left(X_{k},Y_{k}\right)}and to introduce a virtual observation of the form for some sequence of independent random variablesVk{\displaystyle {\mathcal {V}}_{k}}with knownprobability density functions. The central idea is to observe that The particle filter associated with the Markov processXk=(Xk,Yk){\displaystyle {\mathcal {X}}_{k}=\left(X_{k},Y_{k}\right)}given the partial observationsY0=y0,⋯,Yk=yk,{\displaystyle {\mathcal {Y}}_{0}=y_{0},\cdots ,{\mathcal {Y}}_{k}=y_{k},}is defined in terms of particles evolving inRdx+dy{\displaystyle \mathbb {R} ^{d_{x}+d_{y}}}with a likelihood function given with some obvious abusive notation byp(Yk|Xk){\displaystyle p({\mathcal {Y}}_{k}|{\mathcal {X}}_{k})}. These probabilistic techniques are closely related toApproximate Bayesian Computation(ABC). In the context of particle filters, these ABC particle filtering techniques were introduced in 1998 by P. Del Moral, J. Jacod and P. Protter.[65]They were further developed by P. Del Moral, A. Doucet and A. Jasra.[66][67] Bayes' rulefor conditional probability gives: where Particle filters are also an approximation, but with enough particles they can be much more accurate.[2][4][5][49][50]The nonlinear filtering equation is given by the recursion p(xk|y0,⋯,yk−1)⟶updatingp(xk|y0,⋯,yk)=p(yk|xk)p(xk|y0,⋯,yk−1)∫p(yk|xk′)p(xk′|y0,⋯,yk−1)dxk′⟶predictionp(xk+1|y0,⋯,yk)=∫p(xk+1|xk)p(xk|y0,⋯,yk)dxk{\displaystyle {\begin{aligned}p(x_{k}|y_{0},\cdots ,y_{k-1})&{\stackrel {\text{updating}}{\longrightarrow }}p(x_{k}|y_{0},\cdots ,y_{k})={\frac {p(y_{k}|x_{k})p(x_{k}|y_{0},\cdots ,y_{k-1})}{\int p(y_{k}|x'_{k})p(x'_{k}|y_{0},\cdots ,y_{k-1})dx'_{k}}}\\&{\stackrel {\text{prediction}}{\longrightarrow }}p(x_{k+1}|y_{0},\cdots ,y_{k})=\int p(x_{k+1}|x_{k})p(x_{k}|y_{0},\cdots ,y_{k})dx_{k}\end{aligned}}} with the conventionp(x0|y0,⋯,yk−1)=p(x0){\displaystyle p(x_{0}|y_{0},\cdots ,y_{k-1})=p(x_{0})}fork= 0. The nonlinear filtering problem consists in computing these conditional distributions sequentially. 
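The updating/prediction recursion above translates almost line for line into the bootstrap filter. The sketch below reuses g, h and the noise scales from the previous example and approximates p(x_k | y_0, ..., y_k) with a cloud of N particles; multinomial resampling plays the role of the selection step. It is a minimal illustration, not an optimized implementation.

```python
def bootstrap_filter(ys, n=1000, q=math.sqrt(10.0), r=1.0):
    """Approximate E[X_k | y_0..y_k] for each k with n particles, alternating
    the updating (weight/resample) and prediction (propagate) steps."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]  # from p(x_0)
    means = []
    for y in ys:
        # Updating: weight each particle by the likelihood p(y_k | x_k);
        # the tiny floor keeps the weights summable even under underflow.
        w = [math.exp(-(y - h(x)) ** 2 / (2.0 * r * r)) + 1e-300
             for x in particles]
        total = sum(w)
        means.append(sum(wi * x for wi, x in zip(w, particles)) / total)
        # Selection: multinomial resampling proportional to the weights.
        particles = random.choices(particles, weights=w, k=n)
        # Prediction (mutation): propagate through the transition kernel.
        particles = [g(x) + random.gauss(0.0, q) for x in particles]
    return means

xs, ys = simulate(50)
est = bootstrap_filter(ys)   # est[k] approximates E[X_k | y_0..y_k]
```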
We fix a time horizon n and a sequence of observationsY0=y0,⋯,Yn=yn{\displaystyle Y_{0}=y_{0},\cdots ,Y_{n}=y_{n}}, and for eachk= 0, ...,nwe set: In this notation, for any bounded functionFon the set of trajectories ofXk{\displaystyle X_{k}}from the origink= 0 up to timek=n, we have the Feynman-Kac formula Feynman-Kac path integration models arise in a variety of scientific disciplines, including in computational physics, biology, information theory and computer sciences.[8][10][5]Their interpretations are dependent on the application domain. For instance, if we choose the indicator functionGn(xn)=1A(xn){\displaystyle G_{n}(x_{n})=1_{A}(x_{n})}of some subset of the state space, they represent the conditional distribution of a Markov chain given that it stays in a given tube; that is, we have: and as soon as the normalizing constant is strictly positive. Initially, such an algorithm starts withNindependent random variables(ξ0i)1⩽i⩽N{\displaystyle \left(\xi _{0}^{i}\right)_{1\leqslant i\leqslant N}}with common probability densityp(x0){\displaystyle p(x_{0})}. The genetic algorithm selection-mutation transitions[2][4] mimic/approximate the updating-prediction transitions of the optimal filter evolution (Eq. 1): whereδa{\displaystyle \delta _{a}}stands for theDirac measureat a given state a. In the above displayed formulaep(yk|ξki){\displaystyle p(y_{k}|\xi _{k}^{i})}stands for the likelihood functionxk↦p(yk|xk){\displaystyle x_{k}\mapsto p(y_{k}|x_{k})}evaluated atxk=ξki{\displaystyle x_{k}=\xi _{k}^{i}}, andp(xk+1|ξ^ki){\displaystyle p(x_{k+1}|{\widehat {\xi }}_{k}^{i})}stands for the conditional densityp(xk+1|xk){\displaystyle p(x_{k+1}|x_{k})}evaluated atxk=ξ^ki{\displaystyle x_{k}={\widehat {\xi }}_{k}^{i}}. At each timek, we have the particle approximations and In the genetic algorithms andevolutionary computingcommunity, the mutation-selection Markov chain described above is often called the genetic algorithm with proportional selection. Several branching variants, including some with random population sizes, have also been proposed.[5][45][48] Particle methods, like all sampling-based approaches (e.g.,Markov Chain Monte Carlo), generate a set of samples that approximate the filtering density For example, we may haveNsamples from the approximate posterior distribution ofXk{\displaystyle X_{k}}, where the samples are labeled with superscripts as: Then, expectations with respect to the filtering distribution are approximated by with whereδa{\displaystyle \delta _{a}}stands for theDirac measureat a given state a. The functionf, in the usual way for Monte Carlo, can give all themomentsetc. of the distribution up to some approximation error. When the approximation equation (Eq. 2) is satisfied for any bounded functionfwe write Particle filters can be interpreted as a genetic type particle algorithm evolving with mutation and selection transitions. We can keep track of the ancestral lines of the particlesi=1,⋯,N{\displaystyle i=1,\cdots ,N}. The random statesξ^l,ki{\displaystyle {\widehat {\xi }}_{l,k}^{i}}, with the lower indices l=0,...,k, stand for the ancestors of the individualξ^k,ki=ξ^ki{\displaystyle {\widehat {\xi }}_{k,k}^{i}={\widehat {\xi }}_{k}^{i}}at levels l=0,...,k. In this situation, we have the approximation formula with theempirical measure HereFstands for any bounded function on the path space of the signal. In a more synthetic form (Eq. 3) is equivalent to Particle filters can be interpreted in many different ways. 
From the probabilistic point of view they coincide with amean-field particleinterpretation of the nonlinear filtering equation. The updating-prediction transitions of the optimal filter evolution can also be interpreted as the classical genetic type selection-mutation transitions of individuals. The sequential importance resampling technique provides another interpretation of the filtering transitions coupling importance sampling with the bootstrap resampling step. Last, but not least, particle filters can be seen as an acceptance-rejection methodology equipped with a recycling mechanism.[10][5] The nonlinear filtering evolution can be interpreted as a dynamical system in the set of probability measures of the formηn+1=Φn+1(ηn){\displaystyle \eta _{n+1}=\Phi _{n+1}\left(\eta _{n}\right)}whereΦn+1{\displaystyle \Phi _{n+1}}stands for some mapping from the set of probability distribution into itself. For instance, the evolution of the one-step optimal predictorηn(dxn)=p(xn|y0,⋯,yn−1)dxn{\displaystyle \eta _{n}(dx_{n})=p(x_{n}|y_{0},\cdots ,y_{n-1})dx_{n}} satisfies a nonlinear evolution starting with the probability distributionη0(dx0)=p(x0)dx0{\displaystyle \eta _{0}(dx_{0})=p(x_{0})dx_{0}}. One of the simplest ways to approximate these probability measures is to start withNindependent random variables(ξ0i)1⩽i⩽N{\displaystyle \left(\xi _{0}^{i}\right)_{1\leqslant i\leqslant N}}with common probability distributionη0(dx0)=p(x0)dx0{\displaystyle \eta _{0}(dx_{0})=p(x_{0})dx_{0}}. Suppose we have defined a sequence ofNrandom variables(ξni)1⩽i⩽N{\displaystyle \left(\xi _{n}^{i}\right)_{1\leqslant i\leqslant N}}such that At the next step we sampleN(conditionally) independent random variablesξn+1:=(ξn+1i)1⩽i⩽N{\displaystyle \xi _{n+1}:=\left(\xi _{n+1}^{i}\right)_{1\leqslant i\leqslant N}}with common law . We illustrate this mean-field particle principle in the context of the evolution of the one step optimal predictors p(xk|y0,⋯,yk−1)dxk→p(xk+1|y0,⋯,yk)=∫p(xk+1|xk′)p(yk|xk′)p(xk′|y0,⋯,yk−1)dxk′∫p(yk|xk″)p(xk″|y0,⋯,yk−1)dxk″{\displaystyle p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}\to p(x_{k+1}|y_{0},\cdots ,y_{k})=\int p(x_{k+1}|x'_{k}){\frac {p(y_{k}|x_{k}')p(x'_{k}|y_{0},\cdots ,y_{k-1})dx'_{k}}{\int p(y_{k}|x''_{k})p(x''_{k}|y_{0},\cdots ,y_{k-1})dx''_{k}}}} Fork= 0 we use the conventionp(x0|y0,⋯,y−1):=p(x0){\displaystyle p(x_{0}|y_{0},\cdots ,y_{-1}):=p(x_{0})}. By the law of large numbers, we have in the sense that for any bounded functionf{\displaystyle f}. We further assume that we have constructed a sequence of particles(ξki)1⩽i⩽N{\displaystyle \left(\xi _{k}^{i}\right)_{1\leqslant i\leqslant N}}at some rankksuch that in the sense that for any bounded functionf{\displaystyle f}we have In this situation, replacingp(xk|y0,⋯,yk−1)dxk{\displaystyle p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}}by theempirical measurep^(dxk|y0,⋯,yk−1){\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})}in the evolution equation of the one-step optimal filter stated in (Eq. 
4) we find that Notice that the right hand side in the above formula is a weighted probability mixture wherep(yk|ξki){\displaystyle p(y_{k}|\xi _{k}^{i})}stands for the densityp(yk|xk){\displaystyle p(y_{k}|x_{k})}evaluated atxk=ξki{\displaystyle x_{k}=\xi _{k}^{i}}, andp(xk+1|ξki){\displaystyle p(x_{k+1}|\xi _{k}^{i})}stands for the densityp(xk+1|xk){\displaystyle p(x_{k+1}|x_{k})}evaluated atxk=ξki{\displaystyle x_{k}=\xi _{k}^{i}}fori=1,⋯,N.{\displaystyle i=1,\cdots ,N.} Then, we sampleNindependent random variables(ξk+1i)1⩽i⩽N{\displaystyle \left(\xi _{k+1}^{i}\right)_{1\leqslant i\leqslant N}}with common probability densityq^(xk+1|y0,⋯,yk){\displaystyle {\widehat {q}}(x_{k+1}|y_{0},\cdots ,y_{k})}so that Iterating this procedure, we design a Markov chain such that Notice that the optimal filter is approximated at each time step k using Bayes' formulae The terminology "mean-field approximation" comes from the fact that we replace at each time step the probability measurep(dxk|y0,⋯,yk−1){\displaystyle p(dx_{k}|y_{0},\cdots ,y_{k-1})}by the empirical approximationp^(dxk|y0,⋯,yk−1){\displaystyle {\widehat {p}}(dx_{k}|y_{0},\cdots ,y_{k-1})}. The mean-field particle approximation of the filtering problem is far from being unique. Several strategies are developed in the books.[10][5] The analysis of the convergence of particle filters was started in 1996[2][4]and continued in 2000 in the book[8]and a series of articles.[48][49][50][51][52][68][69]More recent developments can be found in the books.[10][5]When the filtering equation is stable (in the sense that it corrects any erroneous initial condition), the bias and the variance of the particle estimates are controlled by the non-asymptotic uniform estimates for any functionfbounded by 1, and for some finite constantsc1,c2.{\displaystyle c_{1},c_{2}.}In addition, for anyx⩾0{\displaystyle x\geqslant 0}: for some finite constantsc1,c2{\displaystyle c_{1},c_{2}}related to the asymptotic bias and variance of the particle estimate, and some finite constantc. The same results hold if we replace the one step optimal predictor by the optimal filter approximation. Tracing back in time the ancestral lines of the individualsξ^ki(=ξ^k,ki){\displaystyle {\widehat {\xi }}_{k}^{i}\left(={\widehat {\xi }}_{k,k}^{i}\right)}andξki(=ξk,ki){\displaystyle \xi _{k}^{i}\left(={\xi }_{k,k}^{i}\right)}at every time stepk, we also have the particle approximations These empirical approximations are equivalent to the particle integral approximations for any bounded functionFon the random trajectories of the signal. As shown in[54]the evolution of the genealogical tree coincides with a mean-field particle interpretation of the evolution equations associated with the posterior densities of the signal trajectories. For more details on these path space models, we refer to the books.[10][5] We use the product formula with and the conventionsp(y0|y0,⋯,y−1)=p(y0){\displaystyle p(y_{0}|y_{0},\cdots ,y_{-1})=p(y_{0})}andp(x0|y0,⋯,y−1)=p(x0),{\displaystyle p(x_{0}|y_{0},\cdots ,y_{-1})=p(x_{0}),}fork= 0. Replacingp(xk|y0,⋯,yk−1)dxk{\displaystyle p(x_{k}|y_{0},\cdots ,y_{k-1})dx_{k}}by theempiricalapproximation in the above displayed formula, we design the following unbiased particle approximation of the likelihood function with wherep(yk|ξki){\displaystyle p(y_{k}|\xi _{k}^{i})}stands for the densityp(yk|xk){\displaystyle p(y_{k}|x_{k})}evaluated atxk=ξki{\displaystyle x_{k}=\xi _{k}^{i}}. 
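The unbiased particle approximation of the likelihood function described here amounts to multiplying, over time, the average unnormalized weight of the particle cloud before each resampling step. A sketch, continuing the simulated example above (the logarithm is taken only for numerical stability; the product estimate itself, not its log, is the unbiased quantity):

```python
def log_likelihood_estimate(ys, n=1000, q=math.sqrt(10.0), r=1.0):
    """Particle estimate of log p(y_0, ..., y_n): accumulate
    log( (1/n) * sum_i p(y_k | xi_k^i) ) along the bootstrap filter."""
    c = 1.0 / (math.sqrt(2.0 * math.pi) * r)     # Gaussian normalizing constant
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    ll = 0.0
    for y in ys:
        w = [c * math.exp(-(y - h(x)) ** 2 / (2.0 * r * r)) + 1e-300
             for x in particles]
        ll += math.log(sum(w) / n)               # one factor of the product
        particles = random.choices(particles, weights=w, k=n)
        particles = [g(x) + random.gauss(0.0, q) for x in particles]
    return ll
```

Estimates of this form are what plug into the particle Markov chain Monte Carlo methodologies mentioned earlier, where an unbiased likelihood estimate can stand in for the exact likelihood.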
The design of this particle estimate of the likelihood function and its unbiasedness property were proved in 1996 in the article.[2]Refined variance estimates can be found in the books.[5][10] Using Bayes' rule, we have the formula Notice that This implies that Replacing the one-step optimal predictorsp(xk−1|(y0,⋯,yk−2))dxk−1{\displaystyle p(x_{k-1}|(y_{0},\cdots ,y_{k-2}))dx_{k-1}}by the particleempirical measures we find that We conclude that with the backward particle approximation The probability measure is the probability of the random paths of a Markov chain(Xk,n♭)0⩽k⩽n{\displaystyle \left(\mathbb {X} _{k,n}^{\flat }\right)_{0\leqslant k\leqslant n}}running backward in time from time k=n to time k=0, and evolving at each time step k in the state space associated with the population of particlesξki,i=1,⋯,N.{\displaystyle \xi _{k}^{i},i=1,\cdots ,N.} In the above displayed formula,p^(dxk−1|ξki,(y0,⋯,yk−1)){\displaystyle {\widehat {p}}(dx_{k-1}|\xi _{k}^{i},(y_{0},\cdots ,y_{k-1}))}stands for the conditional distributionp^(dxk−1|xk,(y0,⋯,yk−1)){\displaystyle {\widehat {p}}(dx_{k-1}|x_{k},(y_{0},\cdots ,y_{k-1}))}evaluated atxk=ξki{\displaystyle x_{k}=\xi _{k}^{i}}. In the same vein,p(yk−1|ξk−1j){\displaystyle p(y_{k-1}|\xi _{k-1}^{j})}andp(ξki|ξk−1j){\displaystyle p(\xi _{k}^{i}|\xi _{k-1}^{j})}stand for the conditional densitiesp(yk−1|xk−1){\displaystyle p(y_{k-1}|x_{k-1})}andp(xk|xk−1){\displaystyle p(x_{k}|x_{k-1})}evaluated atxk=ξki{\displaystyle x_{k}=\xi _{k}^{i}}andxk−1=ξk−1j.{\displaystyle x_{k-1}=\xi _{k-1}^{j}.}These models allow integration with respect to the densitiesp((x0,⋯,xn)|(y0,⋯,yn−1)){\displaystyle p((x_{0},\cdots ,x_{n})|(y_{0},\cdots ,y_{n-1}))}to be reduced to matrix operations with respect to the Markov transitions of the chain described above.[55]For instance, for any functionfk{\displaystyle f_{k}}we have the particle estimates where This also shows that if then We shall assume that the filtering equation is stable, in the sense that it corrects any erroneous initial condition. In this situation, theparticle approximations of the likelihood functionsare unbiased and the relative variance is controlled by for some finite constantc. In addition, for anyx⩾0{\displaystyle x\geqslant 0}: for some finite constantsc1,c2{\displaystyle c_{1},c_{2}}related to the asymptotic bias and variance of the particle estimate, and for some finite constantc. The bias and the variance of the particle estimates based on the ancestral lines of the genealogical trees are controlled by the non-asymptotic uniform estimates for any functionFbounded by 1, and for some finite constantsc1,c2.{\displaystyle c_{1},c_{2}.}In addition, for anyx⩾0{\displaystyle x\geqslant 0}: for some finite constantsc1,c2{\displaystyle c_{1},c_{2}}related to the asymptotic bias and variance of the particle estimate, and for some finite constantc. The same type of bias and variance estimates hold for the backward particle smoothers. For additive functionals of the form with functionsfk{\displaystyle f_{k}}bounded by 1, we have and for some finite constantsc1,c2,c3.{\displaystyle c_{1},c_{2},c_{3}.}More refined estimates, including exponentially small error probabilities, are developed in the book.[10] Sequential ImportanceResampling(SIR), Monte Carlo filtering (Kitagawa 1993[35]), the bootstrap filtering algorithm (Gordon et al. 1993[37]) and single distribution resampling (Bejuri W.M.Y.B et al. 
2017[70]), are also commonly applied filtering algorithms, which approximate the filtering probability densityp(xk|y0,⋯,yk){\displaystyle p(x_{k}|y_{0},\cdots ,y_{k})}by a weighted set ofNsamples Theimportance weightswk(i){\displaystyle w_{k}^{(i)}}are approximations to the relative posterior probabilities (or densities) of the samples such that Sequential importance sampling (SIS) is a sequential (i.e., recursive) version ofimportance sampling. As in importance sampling, the expectation of a functionfcan be approximated as a weighted average For a finite set of samples, the algorithm performance is dependent on the choice of theproposal distribution The "optimal" proposal distributionis given as thetarget distribution This particular choice of proposal transition has been proposed by P. Del Moral in 1996 and 1998.[4]When it is difficult to sample transitions according to the distributionp(xk|xk−1,yk){\displaystyle p(x_{k}|x_{k-1},y_{k})}one natural strategy is to use the following particle approximation with the empirical approximation associated withN(or any other large number of samples) independent random samplesXki(xk−1),i=1,⋯,N{\displaystyle X_{k}^{i}(x_{k-1}),i=1,\cdots ,N}with the conditional distribution of the random stateXk{\displaystyle X_{k}}givenXk−1=xk−1{\displaystyle X_{k-1}=x_{k-1}}. The consistency of the resulting particle filter of this approximation and other extensions are developed in.[4]In the above displayδa{\displaystyle \delta _{a}}stands for theDirac measureat a given state a. However, the transition prior probability distribution is often used as importance function, since it is easier to draw particles (or samples) and perform subsequent importance weight calculations: Sequential Importance Resampling(SIR) filters with transition prior probability distribution as importance function are commonly known asbootstrap filterandcondensation algorithm. Resamplingis used to avoid the problem of the degeneracy of the algorithm, that is, avoiding the situation that all but one of the importance weights are close to zero. The performance of the algorithm can be also affected by proper choice of resampling method. Thestratified samplingproposed by Kitagawa (1993[35]) is optimal in terms of variance. A single step of sequential importance resampling is as follows: The term "Sampling Importance Resampling" is also sometimes used when referring to SIR filters, but the termImportance Resamplingis more accurate because the word "resampling" implies that the initial sampling has already been done.[71] The "direct version" algorithm[citation needed]is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection. To generate a single samplexatkfrompxk|y1:k(x|y1:k){\displaystyle p_{x_{k}|y_{1:k}}(x|y_{1:k})}: The goal is to generate P "particles" atkusing only the particles fromk−1{\displaystyle k-1}. This requires that a Markov equation can be written (and computed) to generate axk{\displaystyle x_{k}}based only uponxk−1{\displaystyle x_{k-1}}. This algorithm uses the composition of the P particles fromk−1{\displaystyle k-1}to generate a particle atkand repeats (steps 2–6) until P particles are generated atk. This can be more easily visualized ifxis viewed as a two-dimensional array. One dimension iskand the other dimension is the particle number. For example,x(k,i){\displaystyle x(k,i)}would be the ithparticle atk{\displaystyle k}and can also be writtenxk(i){\displaystyle x_{k}^{(i)}}(as done above in the algorithm). 
Step 3 generates a potential xk{\displaystyle x_{k}} based on a randomly chosen particle (xk−1(i){\displaystyle x_{k-1}^{(i)}}) at time k−1{\displaystyle k-1} and rejects or accepts it in step 6. In other words, the xk{\displaystyle x_{k}} values are generated using the previously generated xk−1{\displaystyle x_{k-1}}. Particle filters and Feynman-Kac particle methodologies find application in several contexts, as an effective means of tackling noisy observations or strong nonlinearities, such as:
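To make the SIR recursion described above concrete, here is a minimal Python (NumPy) sketch of a bootstrap particle filter, assuming a generic state-space model supplied by the caller; the helper names (sample_prior, sample_transition, obs_likelihood) are illustrative, not from the cited literature. It uses the transition prior as importance function, multinomial resampling, and accumulates the particle estimate of the (log-)likelihood discussed earlier; stratified resampling would reduce the resampling variance.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, sample_prior,
                              sample_transition, obs_likelihood, rng=None):
    """Sequential Importance Resampling (bootstrap) filter sketch.

    y                 : iterable of observations y_0, ..., y_n
    sample_prior      : f(N, rng) -> (N, d) array of initial particles
    sample_transition : f(particles, rng) -> (N, d) propagated particles
    obs_likelihood    : f(y_k, particles) -> (N,) values of p(y_k | x_k)
    Returns filtering means and the log of the particle likelihood estimate.
    """
    rng = rng or np.random.default_rng()
    particles = sample_prior(n_particles, rng)
    log_likelihood = 0.0
    means = []
    for y_k in y:
        # Weight each particle by the observation density p(y_k | x_k).
        w = obs_likelihood(y_k, particles)
        log_likelihood += np.log(w.mean())      # running likelihood estimate
        w = w / w.sum()
        means.append((w[:, None] * particles).sum(axis=0))
        # Multinomial resampling to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
        # Propagate with the transition prior as importance function.
        particles = sample_transition(particles, rng)
    return np.array(means), log_likelihood

# Toy usage on a synthetic scalar autoregressive model with Gaussian noise.
means, ll = bootstrap_particle_filter(
    y=np.random.randn(50), n_particles=1000,
    sample_prior=lambda n, rng: rng.normal(size=(n, 1)),
    sample_transition=lambda x, rng: 0.9 * x + rng.normal(scale=0.5, size=x.shape),
    obs_likelihood=lambda yk, x: np.exp(-0.5 * (yk - x[:, 0]) ** 2),
)
```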
https://en.wikipedia.org/wiki/Particle_filter
A schema (pl.: schemata) is a template in computer science used in the field of genetic algorithms that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, forming a basis for a product topology on strings.[1] In other words, schemata can be used to generate a topology on a space of strings. For example, consider binary strings of length 6. The schema 1**0*1 describes the set of all words of length 6 with 1's at the first and sixth positions and a 0 at the fourth position. The * is a wildcard symbol, which means that positions 2, 3 and 5 can have a value of either 1 or 0. The order of a schema is defined as the number of fixed positions in the template, while the defining length δ(H){\displaystyle \delta (H)} is the distance between the first and last specific positions. The order of 1**0*1 is 3 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function. The length of a schema H{\displaystyle H}, called N(H){\displaystyle N(H)}, is defined as the total number of nodes in the schema. N(H){\displaystyle N(H)} is also equal to the number of nodes in the programs matching H{\displaystyle H}.[2] If the child of an individual that matches schema H does not itself match H, the schema is said to have been disrupted.[2] In evolutionary computing such as genetic algorithms and genetic programming, propagation refers to the inheritance of characteristics of one generation by the next. For example, a schema is propagated if individuals in the current generation match it and so do those in the next generation. Those in the next generation may be (but do not have to be) children of parents who matched it. Recently, schemata have been studied using order theory.[3] Two basic operators are defined for schemata: expansion and compression. The expansion maps a schema onto a set of words which it represents, while the compression maps a set of words onto a schema. In the following definitions Σ{\displaystyle \Sigma } denotes an alphabet, Σl{\displaystyle \Sigma ^{l}} denotes all words of length l{\displaystyle l} over the alphabet Σ{\displaystyle \Sigma }, and Σ∗{\displaystyle \Sigma _{*}} denotes the alphabet Σ{\displaystyle \Sigma } with the extra symbol ∗{\displaystyle *}. Σ∗l{\displaystyle \Sigma _{*}^{l}} denotes all schemata of length l{\displaystyle l} over the alphabet Σ∗{\displaystyle \Sigma _{*}}, as well as the empty schema ϵ∗{\displaystyle \epsilon _{*}}. For any schema s∈Σ∗l{\displaystyle s\in \Sigma _{*}^{l}}, the following operator ↑s{\displaystyle {\uparrow }s}, called the expansion of s{\displaystyle s}, maps s{\displaystyle s} to a subset of words in Σl{\displaystyle \Sigma ^{l}}: ↑s:={b∈Σl | bi=si or si=∗ for each i∈{1,...,l}}{\displaystyle {\uparrow }s:=\{b\in \Sigma ^{l}|b_{i}=s_{i}{\mbox{ or }}s_{i}=*{\mbox{ for each }}i\in \{1,...,l\}\}} where the subscript i{\displaystyle i} denotes the character at position i{\displaystyle i} in a word or schema. When s=ϵ∗{\displaystyle s=\epsilon _{*}}, then ↑s=∅{\displaystyle {\uparrow }s=\emptyset }. More simply put, ↑s{\displaystyle {\uparrow }s} is the set of all words in Σl{\displaystyle \Sigma ^{l}} that can be made by exchanging the ∗{\displaystyle *} symbols in s{\displaystyle s} with symbols from Σ{\displaystyle \Sigma }.
For example, if Σ={0,1}{\displaystyle \Sigma =\{0,1\}}, l=3{\displaystyle l=3} and s=10∗{\displaystyle s=10*}, then ↑s={100,101}{\displaystyle {\uparrow }s=\{100,101\}}. Conversely, for any A⊆Σl{\displaystyle A\subseteq \Sigma ^{l}} we define ↓A{\displaystyle {\downarrow }{A}}, called the compression of A{\displaystyle A}, which maps A{\displaystyle A} onto a schema s∈Σ∗l{\displaystyle s\in \Sigma _{*}^{l}}: ↓A:=s{\displaystyle {\downarrow }A:=s} where s{\displaystyle s} is a schema of length l{\displaystyle l} such that the symbol at position i{\displaystyle i} in s{\displaystyle s} is determined in the following way: if xi=yi{\displaystyle x_{i}=y_{i}} for all x,y∈A{\displaystyle x,y\in A} then si=xi{\displaystyle s_{i}=x_{i}}, otherwise si=∗{\displaystyle s_{i}=*}. If A=∅{\displaystyle A=\emptyset } then ↓A=ϵ∗{\displaystyle {\downarrow }A=\epsilon _{*}}. One can think of this operator as stacking up all the items in A{\displaystyle A}: if all elements in a column are equivalent, the symbol at that position in s{\displaystyle s} takes this value; otherwise there is a wildcard symbol. For example, let A={100,000,010}{\displaystyle A=\{100,000,010\}}; then ↓A=∗∗0{\displaystyle {\downarrow }A=**0}. Schemata can be partially ordered. For any a,b∈Σ∗l{\displaystyle a,b\in \Sigma _{*}^{l}} we say a≤b{\displaystyle a\leq b} if and only if ↑a⊆↑b{\displaystyle {\uparrow }a\subseteq {\uparrow }b}. It follows that ≤{\displaystyle \leq } is a partial ordering on a set of schemata, from the reflexivity, antisymmetry and transitivity of the subset relation. For example, ϵ∗≤11≤1∗≤∗∗{\displaystyle \epsilon _{*}\leq 11\leq 1*\leq **}. This is because ↑ϵ∗⊆↑11⊆↑1∗⊆↑∗∗=∅⊆{11}⊆{11,10}⊆{11,10,01,00}{\displaystyle {\uparrow }\epsilon _{*}\subseteq {\uparrow }11\subseteq {\uparrow }1*\subseteq {\uparrow }**=\emptyset \subseteq \{11\}\subseteq \{11,10\}\subseteq \{11,10,01,00\}}. The compression and expansion operators form a Galois connection, where ↓{\displaystyle \downarrow } is the lower adjoint and ↑{\displaystyle \uparrow } the upper adjoint.[3] For a set A⊆Σl{\displaystyle A\subseteq \Sigma ^{l}}, we call the process of calculating the compression on each subset of A, that is {↓X|X⊆A}{\displaystyle \{{\downarrow }X|X\subseteq A\}}, the schematic completion of A{\displaystyle A}, denoted S(A){\displaystyle {\mathcal {S}}(A)}.[3] For example, let A={110,100,001,000}{\displaystyle A=\{110,100,001,000\}}. The schematic completion of A{\displaystyle A} results in the following set: S(A)={001,100,000,110,00∗,∗00,1∗0,∗∗0,∗0∗,∗∗∗,ϵ∗}{\displaystyle {\mathcal {S}}(A)=\{001,100,000,110,00*,*00,1*0,**0,*0*,***,\epsilon _{*}\}} The poset (S(A),≤){\displaystyle ({\mathcal {S}}(A),\leq )} always forms a complete lattice called the schematic lattice. The schematic lattice is similar to the concept lattice found in Formal concept analysis.
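Since the expansion and compression operators are fully specified above, they are easy to implement. Below is a small Python sketch for the binary alphabet of the running example; representing schemata as strings with "*" for the wildcard and None for the empty schema ϵ∗ are our own representation choices. The asserts reproduce the article's examples, and schematic_completion enumerates subsets naively (exponential in |A|), which is fine for illustration.

```python
from itertools import combinations, product

SIGMA = "01"  # the alphabet of the running example

def expand(s):
    """Expansion: the set of words matched by schema s (empty schema -> empty set)."""
    if s is None:                       # None stands in for the empty schema
        return set()
    choices = [SIGMA if c == "*" else c for c in s]
    return {"".join(w) for w in product(*choices)}

def compress(words):
    """Compression: the most specific schema matching every word in the set."""
    words = list(words)
    if not words:
        return None                     # the empty schema
    return "".join(col[0] if len(set(col)) == 1 else "*" for col in zip(*words))

def schematic_completion(A):
    """All schemata obtained by compressing subsets of A, i.e. {compress(X) : X <= A}."""
    return {compress(subset)
            for r in range(len(A) + 1)
            for subset in combinations(sorted(A), r)}

assert expand("10*") == {"100", "101"}
assert compress({"100", "000", "010"}) == "**0"
print(schematic_completion({"110", "100", "001", "000"}))
```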
https://en.wikipedia.org/wiki/Propagation_of_schema
Universal Darwinism, also known asgeneralized Darwinism,universal selection theory,[1]orDarwinian metaphysics,[2][3][4]is a variety of approaches that extend the theory ofDarwinismbeyond its original domain ofbiological evolutionon Earth. Universal Darwinism aims to formulate a generalized version of the mechanisms ofvariation,selectionandheredityproposed byCharles Darwin, so that they can apply to explainevolutionin a wide variety of other domains, includingpsychology,linguistics,economics,culture,medicine,computer science, andphysics. At the most fundamental level,Charles Darwin's theory ofevolutionstates that organisms evolve andadaptto their environment by an iterative process. This process can be conceived as anevolutionary algorithmthat searches the space of possible forms (thefitness landscape) for the ones that are best adapted. The process has three components: After those fit variants are retained, they can again undergo variation, either directly or in their offspring, starting a new round of theiteration. The overall mechanism is similar to the problem-solving procedures oftrial-and-erroror generate-and-test: evolution can be seen as searching for the best solution for the problem of how to survive and reproduce by generating new trials, testing how well they perform, eliminating the failures, and retaining the successes. The generalization made in "universal" Darwinism is to replace "organism" by any recognizable pattern, phenomenon, or system. The first requirement is that the pattern can "survive" (maintain, be retained) long enough or "reproduce" (replicate, be copied) sufficiently frequently so as not to disappear immediately. This is the heredity component: the information in the pattern must be retained or passed on. The second requirement is that during survival and reproduction variation (small changes in the pattern) can occur. The final requirement is that there is a selective "preference" so that certain variants tend to survive or reproduce "better" than others. If these conditions are met, then, by the logic of natural selection, the pattern will evolve towards more adapted forms. Examples of patterns that have been postulated to undergo variation and selection, and thus adaptation, aregenes, ideas (memes), theories, technologies,neuronsand their connections, words, computer programs, firms,antibodies, institutions, law and judicial systems, quantum states and even whole universes.[5] Conceptually, "evolutionary theorizing about cultural, social, and economic phenomena" preceded Darwin,[6]but was still lacking the concept of natural selection. Darwin himself, together with subsequent 19th-century thinkers such asHerbert Spencer,Thorstein Veblen,James Mark BaldwinandWilliam James, was quick to apply the idea of selection to other domains, such as language, psychology, society, and culture.[7]However, this evolutionary tradition was largely banned from the social sciences in the beginning of the 20th century, in part because of the bad reputation ofsocial Darwinism, an attempt to use Darwinism to justify social inequality.[citation needed] Starting in the 1950s,Donald T. 
Campbellwas one of the first and most influential authors to revive the tradition, and to formulate a generalized Darwinianalgorithmdirectly applicable to phenomena outside of biology.[8]In this, he was inspired byWilliam Ross Ashby's view ofself-organizationand intelligence as fundamental processes of selection.[9]His aim was to explain the development ofscienceand other forms ofknowledgeby focusing on the variation and selection of ideas and theories, thus laying the basis for the domain ofevolutionary epistemology. In the 1990s, Campbell's formulation of the mechanism of "blind-variation-and-selective-retention" (BVSR) was further developed and extended to other domains under the labels of "universal selection theory"[10]or "universal selectionism"[11]by his disciplesGary Cziko,[12][13]Mark Bickhard,[14]andFrancis Heylighen.[15][16] Richard Dawkinsmay have first coined the term "universal Darwinism" in 1983 to describe his conjecture that any possible life forms existing outside theSolar Systemwould evolve by natural selection just as they do on Earth.[17]This conjecture was also presented in 1983 in a paper entitled “The Darwinian Dynamic” that dealt with the evolution of order in living systems and certain nonliving physical systems.[18]It was suggested “that ‘life’, wherever it might exist in the universe, evolves according to the same dynamical law” termed the Darwinian dynamic.Henry Plotkinin his 1997 book[19]onDarwin machinesmakes the link between universal Darwinism and Campbell's evolutionary epistemology.Susan Blackmore, in her 1999 bookThe Meme Machine, devotes a chapter titled 'Universal Darwinism' to a discussion of the applicability of the Darwinian process to a wide range of scientific subject matters. The philosopher of mindDaniel Dennett, in his 1995 bookDarwin's Dangerous Idea, developed the idea of a Darwinian process, involving variation, selection and retention, as a generic algorithm that is substrate-neutral and could be applied to many fields of knowledge outside of biology. He described the idea of natural selection as a "universal acid" that cannot be contained in any vessel, as it seeps through the walls and spreads ever further, touching and transforming ever more domains. He notes in particular the field ofmemeticsin the social sciences.[20][13] In agreement with Dennett's prediction, over the past decades the Darwinian perspective has spread ever more widely, in particular across thesocial sciencesas the foundation for numerous schools of study includingmemetics,evolutionary economics,evolutionary psychology,evolutionary anthropology,neural Darwinism, andevolutionary linguistics.[21]Researchers have postulated Darwinian processes as operating at the foundations of physics, cosmology and chemistry via the theories ofquantum Darwinism,[22]observation selection effectsandcosmological natural selection.[23][24]Similar mechanisms are extensively applied incomputer sciencein the domains ofgenetic algorithmsandevolutionary computation, which develop solutions to complex problems via a process of variation and selection. Author D. B. Kelley has formulated one of the most all-encompassing approaches to universal Darwinism. In his 2013 bookThe Origin of Phenomena, he holds thatnatural selectioninvolves not the preservation of favored races in the struggle for life, as shown byDarwin, but the preservation of favored systems in contention for existence. 
The fundamental mechanism behind all such stability and evolution is therefore what Kelley calls "survival of the fittestsystems."[25]Because all systems are cyclical, the Darwinian processes ofiteration, variation andselectionare operative not only among species but among all natural phenomena both large-scale and small. Kelley thus maintains that, since theBig Bangespecially, theuniversehas evolved from a highly chaotic state to one that is now highly ordered with many stable phenomena, naturally selected.[25] The following approaches can all be seen as exemplifying a generalization of Darwinian ideas outside of their original domain of biology. These "Darwinian extensions" can be grouped in two categories, depending on whether they discuss implications of biological (genetic) evolution in other disciplines (e.g. medicine or psychology), or discuss processes of variation and selection of entities other than genes (e.g. computer programs, firms or ideas). However, there is no strict separation possible, since most of these approaches (e.g. in sociology, psychology and linguistics) consider both genetic and non-genetic (e.g. cultural) aspects of evolution, as well as the interactions between them (see e.g.gene-culture coevolution).
https://en.wikipedia.org/wiki/Universal_Darwinism
In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity.[1][2][3][4] Metaheuristics sample a subset of solutions which is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems.[1][5][6] Their use is of interest whenever exact or other (approximate) methods are not available or are not expedient, either because the calculation time is too long or because, for example, the solution provided is too imprecise. Compared to optimization algorithms and iterative methods, metaheuristics do not guarantee that a globally optimal solution can be found on some class of problems.[4] Many metaheuristics implement some form of stochastic optimization, so that the solution found is dependent on the set of random variables generated.[3] In combinatorial optimization, many problems belong to the class of NP-complete problems and thus cannot be solved exactly in acceptable time once the problem size exceeds even a relatively modest threshold.[7][8] Metaheuristics then often provide good solutions with less computational effort than approximation methods, iterative methods, or simple heuristics.[4][1] This also applies in the field of continuous or mixed-integer optimization.[1][9][10] As such, metaheuristics are useful approaches for optimization problems.[3] Several books and survey papers have been published on the subject.[3][4][1][11][12] A literature review on metaheuristic optimization[13] suggested that it was Fred Glover who coined the word metaheuristics.[14] Most literature on metaheuristics is experimental in nature, describing empirical results based on computer experiments with the algorithms. But some formal theoretical results are also available, often on convergence and the possibility of finding the global optimum.[4][15] Also worth mentioning are the no-free-lunch theorems, which state that there can be no metaheuristic that is better than all others for any given problem. Especially since the turn of the millennium, many metaheuristic methods have been published with claims of novelty and practical efficacy. While the field also features high-quality research, many of the more recent publications have been of poor quality; flaws include vagueness, lack of conceptual elaboration, poor experiments, and ignorance of previous literature.[16][17] These are properties that characterize most metaheuristics:[4] There are a wide variety of metaheuristics[3][1] and a number of properties with respect to which to classify them.[4][24][25][26] The following list is therefore to be understood as an example. One approach is to characterize the type of search strategy.[4] One type of search strategy is an improvement on simple local search algorithms. A well-known local search algorithm is the hill climbing method, which is used to find local optima. However, hill climbing does not guarantee finding globally optimal solutions. Many metaheuristic ideas were proposed to improve local search heuristics in order to find better solutions.
Such metaheuristics include simulated annealing, tabu search, iterated local search, variable neighborhood search, and GRASP.[4] These metaheuristics can be classified as either local search-based or global search metaheuristics. Other global search metaheuristics that are not local search-based are usually population-based metaheuristics. Such metaheuristics include ant colony optimization, evolutionary computation such as genetic algorithms or evolution strategies, particle swarm optimization, the rider optimization algorithm[27] and the bacterial foraging algorithm.[28] Another classification dimension is single-solution vs. population-based searches.[4][12] Single-solution approaches focus on modifying and improving a single candidate solution; single-solution metaheuristics include simulated annealing, iterated local search, variable neighborhood search, and guided local search.[12] Population-based approaches maintain and improve multiple candidate solutions, often using population characteristics to guide the search; population-based metaheuristics include evolutionary computation and particle swarm optimization.[12] Another category of metaheuristics is swarm intelligence, which is the collective behavior of decentralized, self-organized agents in a population or swarm. Ant colony optimization,[29] particle swarm optimization,[12] social cognitive optimization and the bacterial foraging algorithm[28] are examples of this category. A hybrid metaheuristic is one that combines a metaheuristic with other optimization approaches, such as algorithms from mathematical programming, constraint programming, and machine learning. Both components of a hybrid metaheuristic may run concurrently and exchange information to guide the search. On the other hand, memetic algorithms[30] represent the synergy of evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. An example of a memetic algorithm is the use of a local search algorithm instead of or in addition to a basic mutation operator in evolutionary algorithms. A parallel metaheuristic is one that uses the techniques of parallel programming to run multiple metaheuristic searches in parallel; these may range from simple distributed schemes to concurrent search runs that interact to improve the overall solution. With population-based metaheuristics, the population itself can be parallelized by either processing each individual or group with a separate thread, or the metaheuristic itself runs on one computer and the offspring are evaluated in a distributed manner per iteration.[31] The latter is particularly useful if the computational effort for the evaluation is considerably greater than that for the generation of descendants. This is the case in many practical applications, especially in simulation-based calculations of solution quality.[32][33] A very active area of research is the design of nature-inspired metaheuristics. Many recent metaheuristics, especially evolutionary computation-based algorithms, are inspired by natural systems. Nature acts as a source of concepts, mechanisms and principles for designing artificial computing systems to deal with complex computational problems. Such metaheuristics include simulated annealing, evolutionary algorithms, ant colony optimization and particle swarm optimization.
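As a concrete illustration of the local search-based class just described, here is a minimal simulated annealing sketch in Python; the cost function, neighborhood move, and geometric cooling schedule are illustrative assumptions, not prescriptions from the literature cited above. The key idea is that worse moves are accepted with probability exp(-delta/T), which lets the search escape local optima that plain hill climbing would get stuck in.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Minimise cost(x) by local moves, occasionally accepting uphill steps."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Metropolis acceptance: always accept improvements, sometimes worse moves.
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy usage: minimise a bumpy one-dimensional function with many local optima.
best, val = simulated_annealing(
    cost=lambda x: x * x + 10 * math.sin(3 * x),
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    x0=5.0,
)
print(best, val)
```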
A large number of more recent metaphor-inspired metaheuristics have started toattract criticism in the research communityfor hiding their lack of novelty behind an elaborate metaphor.[16][17][25][34]As a result, a number of renowned scientists of the field have proposed a research agenda for the standardization of metaheuristics in order to make them more comparable, among other things.[35]Another consequence is that the publication guidelines of a number of scientific journals have been adapted accordingly.[36][37][38] Most metaheuristics are search methods and when using them, the evaluation function should be subject to greater demands than a mathematical optimization. Not only does the desired target state have to be formulated, but the evaluation should also reward improvements to a solution on the way to the target in order to support and accelerate the search process. Thefitness functionsof evolutionary or memetic algorithms can serve as an example. Metaheuristics are used for all types of optimization problems, ranging fromcontinuousthrough mixed integer problems tocombinatorial optimizationor combinations thereof.[9][39][40]In combinatorial optimization, an optimal solution is sought over adiscretesearch-space. An example problem is thetravelling salesman problemwhere the search-space of candidate solutions grows faster thanexponentiallyas the size of the problem increases, which makes anexhaustive searchfor the optimal solution infeasible.[41][42]Additionally, multidimensional combinatorial problems, including most design problems inengineering[6][43][44][45]such as form-finding and behavior-finding, suffer from thecurse of dimensionality, which also makes them infeasible for exhaustive search oranalytical methods. Metaheuristics are also frequently applied to scheduling problems. A typical representative of this combinatorial task class is job shop scheduling, which involves assigning the work steps of jobs to processing stations in such a way that all jobs are completed on time and altogether in the shortest possible time.[5][46]In practice, restrictions often have to be observed, e.g. by limiting the permissible sequence of work steps of a job through predefined workflows[47]and/or with regard to resource utilisation, e.g. in the form of smoothing the energy demand.[48][49]Popular metaheuristics for combinatorial problems includegenetic algorithmsby Holland et al.,[50]scatter search[51]andtabu search[52]by Glover. Another large field of application are optimization tasks in continuous or mixed-integer search spaces. This includes, e.g., design optimization[6][53][54]or various engineering tasks.[55][56][57]An example of the mixture of combinatorial and continuous optimization is the planning of favourable motion paths for industrial robots.[58][59] A MOF can be defined as ‘‘a set of software tools that provide a correct and reusable implementation of a set of metaheuristics, and the basic mechanisms to accelerate the implementation of its partner subordinate heuristics (possibly including solution encodings and technique-specific operators), which are necessary to solve a particular problem instance using techniques provided’’.[60] There are many candidate optimization tools which can be considered as a MOF of varying feature. 
The following list of 33 MOFs is compared and evaluated in detail in:[60] Comet, EvA2, evolvica, Evolutionary::Algorithm, GAPlayground, jaga, JCLEC, JGAP, jMetal, n-genes, Open Beagle, Opt4j, ParadisEO/EO, Pisa, Watchmaker, FOM, Hypercube, HotFrame, Templar, EasyLocal, iOpt, OptQuest, JDEAL, Optimization Algorithm Toolkit, HeuristicLab, MAFRA, Localizer, GALIB, DREAM, Discropt, MALLBA, MAGMA, and UOF. There have been a number of publications on the support of parallel implementations, which was missing in this comparative study, particularly from the late 2010s onwards.[32][33][61][62][63] Many different metaheuristics are in existence and new variants are continually being proposed. Some of the most significant contributions to the field are:
https://en.wikipedia.org/wiki/Metaheuristics
The followingoutlineis provided as an overview of and topical guide to computer vision: Computer vision–interdisciplinary fieldthat deals with how computers can be made to gain high-level understanding fromdigital imagesorvideos. From the perspective ofengineering, it seeks to automate tasks that the human visual system can do.[1][2][3]Computer vision tasks include methods for acquiring digital images (throughimage sensors),image processing, andimage analysis, to reach an understanding of digital images. In general, it deals with the extraction of high-dimensional data from the real world in order to produce numerical or symbolic information that the computer can interpret. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. History of computer vision
https://en.wikipedia.org/wiki/Outline_of_computer_vision
The following outline is provided as an overview of and topical guide to robotics: Robotics is a branch of mechanical engineering, electrical engineering and computer science that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behaviour, or cognition. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics. The word "robot" was introduced to the public by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), published in 1920. The term "robotics" was coined by Isaac Asimov in his 1941 science fiction short story "Liar!"[1] Robotics can be described as: Robotics incorporates aspects of many disciplines including electronics, engineering, mechanics, software and arts. The design and control of robots relies on knowledge from many fields, including: A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically. A robot can be guided by an external control device, or the control may be embedded within. Autonomous robots – robots that are not controlled by humans: Mobile robots may be classified by: History of robots Robot competition
https://en.wikipedia.org/wiki/Outline_of_robotics
The accuracy paradox is the paradoxical finding that accuracy is not a good metric for predictive models when classifying in predictive analytics. This is because a simple model may have a high level of accuracy but be too crude to be useful. For example, if the incidence of category A is dominant, being found in 99% of cases, then predicting that every case is category A will have an accuracy of 99%. Precision and recall are better measures in such cases.[1][2] The underlying issue is that there is a class imbalance between the positive class and the negative class. Prior probabilities for these classes need to be accounted for in error analysis. Precision and recall help, but precision too can be biased by unbalanced class priors in the test sets.[citation needed] For example, a city of 1 million people has ten terrorists. A profiling system results in the following confusion matrix: Even though the accuracy is (10 + 999000)/1000000 ≈ 99.9%, 990 out of the 1000 positive predictions are incorrect. The precision of 10/(10 + 990) = 1% reveals its poor performance. As the classes are so unbalanced, a better metric is the F1 score = (2 × 0.01 × 1)/(0.01 + 1) ≈ 2% (the recall being 10/(10 + 0) = 1). This statistics-related article is a stub. You can help Wikipedia by expanding it.
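The arithmetic of the example can be checked with a few lines of Python; the four counts below are read off the confusion matrix described above (10 true positives, 990 false positives, 0 false negatives, 999,000 true negatives).

```python
# Confusion-matrix counts from the profiling example.
tp, fp, fn, tn = 10, 990, 0, 999_000

accuracy = (tp + tn) / (tp + fp + fn + tn)            # ~0.999: looks excellent
precision = tp / (tp + fp)                            # 0.01: reveals the problem
recall = tp / (tp + fn)                               # 1.0: every terrorist flagged
f1 = 2 * precision * recall / (precision + recall)    # ~0.0198, i.e. about 2%

print(f"accuracy={accuracy:.4f} precision={precision:.2%} "
      f"recall={recall:.0%} F1={f1:.2%}")
```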
https://en.wikipedia.org/wiki/Accuracy_paradox
Action model learning (sometimes abbreviated action learning) is an area of machine learning concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as input for automated planners. Learning action models is important when goals change. When an agent has acted for a while, it can use its accumulated knowledge about actions in the domain to make better decisions. Thus, learning action models differs from reinforcement learning. It enables reasoning about actions instead of expensive trials in the world.[1] Action model learning is a form of inductive reasoning, where new knowledge is generated based on the agent's observations. The usual motivation for action model learning is the fact that manual specification of action models for planners is often a difficult, time-consuming, and error-prone task (especially in complex environments). Given a training set E{\displaystyle E} consisting of examples e=(s,a,s′){\displaystyle e=(s,a,s')}, where s,s′{\displaystyle s,s'} are observations of a world state from two consecutive time steps t,t′{\displaystyle t,t'} and a{\displaystyle a} is an action instance observed in time step t{\displaystyle t}, the goal of action model learning in general is to construct an action model ⟨D,P⟩{\displaystyle \langle D,P\rangle }, where D{\displaystyle D} is a description of domain dynamics in an action description formalism like STRIPS, ADL or PDDL and P{\displaystyle P} is a probability function defined over the elements of D{\displaystyle D}.[2] However, many state-of-the-art action learning methods assume determinism and do not induce P{\displaystyle P}. In addition to determinism, individual methods differ in how they deal with other attributes of the domain (e.g. partial observability or sensoric noise). Recent action learning methods take various approaches and employ a wide variety of tools from different areas of artificial intelligence and computational logic. As an example of a method based on propositional logic, we can mention the SLAF (Simultaneous Learning and Filtering) algorithm,[1] which uses the agent's observations to construct a long propositional formula over time and subsequently interprets it using a satisfiability (SAT) solver. Another technique, in which learning is converted into a satisfiability problem (weighted MAX-SAT in this case) and SAT solvers are used, is implemented in ARMS (Action-Relation Modeling System).[3] Two mutually similar, fully declarative approaches to action learning were based on the logic programming paradigm Answer Set Programming (ASP)[4] and its extension, Reactive ASP.[5] In another example, a bottom-up inductive logic programming approach was employed.[6] Several different solutions are not directly logic-based, for example, action model learning using a perceptron algorithm[7] or multi-level greedy search over the space of possible action models.[8] In an older paper from 1992,[9] action model learning was studied as an extension of reinforcement learning. Most action learning research papers are published in journals and conferences focused on artificial intelligence in general (e.g. Journal of Artificial Intelligence Research (JAIR), Artificial Intelligence, Applied Artificial Intelligence (AAI) or AAAI conferences).
Despite the mutual relevance of the topics, action model learning is usually not addressed in planning conferences like the International Conference on Automated Planning and Scheduling (ICAPS).
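To make the (s, a, s′) formulation above concrete, here is a deliberately simplified Python sketch that induces STRIPS-like add/delete effects and candidate preconditions from example triples, under the assumptions of determinism and full observability stated earlier; it is a toy illustration of the general idea, not a reimplementation of SLAF, ARMS, or any other cited system.

```python
def learn_action_model(examples):
    """examples: list of (s, a, s_next) triples with states as frozensets of atoms.
    Returns {action: (preconditions, add_effects, delete_effects)}, assuming
    deterministic actions and fully observable states."""
    model = {}
    for s, a, s2 in examples:
        adds, dels = s2 - s, s - s2           # observed state changes
        if a not in model:
            model[a] = [set(s), set(adds), set(dels)]
        else:
            pre, add_eff, del_eff = model[a]
            pre &= s       # candidate preconditions: atoms true in every pre-state
            add_eff |= adds
            del_eff |= dels
    return {a: tuple(map(frozenset, v)) for a, v in model.items()}

examples = [
    (frozenset({"door_closed", "at_door"}), "open",
     frozenset({"door_open", "at_door"})),
]
print(learn_action_model(examples))
# -> open: preconditions {door_closed, at_door}, adds {door_open}, deletes {door_closed}
```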
https://en.wikipedia.org/wiki/Action_model_learning
Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations of the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in providing personalized support for many different applications and its connection to many different fields of study such as medicine, human-computer interaction, or sociology. Due to its multifaceted nature, different fields may refer to activity recognition as plan recognition, goal recognition, intent recognition, behavior recognition, location estimation and location-based services. Sensor-based activity recognition integrates the emerging area of sensor networks with novel data mining and machine learning techniques to model a wide range of human activities.[1][2] Mobile devices (e.g. smart phones) provide sufficient sensor data and calculation power to enable physical activity recognition to provide an estimation of the energy consumption during everyday life. Sensor-based activity recognition researchers believe that by empowering ubiquitous computers and sensors to monitor the behavior of agents (under consent), these computers will be better suited to act on our behalf. Visual sensors that incorporate color and depth information, such as the Kinect, allow more accurate automatic action recognition and enable many emerging applications such as interactive education[3] and smart environments.[4] Multiple views from visual sensors enable the development of machine learning for automatic view-invariant action recognition.[5] More advanced sensors used in 3D motion capture systems allow highly accurate automatic recognition, at the expense of a more complicated hardware setup.[6] Sensor-based activity recognition is a challenging task due to the inherent noisy nature of the input. Thus, statistical modeling has been the main thrust in this direction, in layers, where the recognition at several intermediate levels is conducted and connected. At the lowest level, where the sensor data are collected, statistical learning concerns how to find the detailed locations of agents from the received signal data. At an intermediate level, statistical inference may be concerned with how to recognize individuals' activities from the inferred location sequences and environmental conditions at the lower levels. Furthermore, at the highest level, a major concern is to find out the overall goal or subgoals of an agent from the activity sequences through a mixture of logical and statistical reasoning. Recognizing activities for multiple users using on-body sensors first appeared in the work by ORL using active badge systems[7] in the early 1990s. Other sensor technology such as acceleration sensors was used for identifying group activity patterns during office scenarios.[8] Activities of multiple users in intelligent environments are addressed in Gu et al.[9] In this work, they investigate the fundamental problem of recognizing activities for multiple users from sensor readings in a home environment, and propose a novel pattern mining approach to recognize both single-user and multi-user activities in a unified solution.
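To illustrate the sensor-based pipeline described above (low-level signals in, activity labels out), here is a minimal Python sketch that extracts sliding-window statistics from a one-dimensional accelerometer stream and trains an off-the-shelf classifier; the window length, feature set, and the synthetic data standing in for real recordings are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win=50):
    """Slice a 1-D sensor stream into fixed windows and compute simple
    per-window statistics (mean, standard deviation, energy)."""
    n = len(signal) // win
    w = signal[: n * win].reshape(n, win)
    return np.c_[w.mean(axis=1), w.std(axis=1), (w ** 2).mean(axis=1)]

rng = np.random.default_rng(0)
walking = rng.normal(0, 2.0, 5000)   # synthetic stand-in: high-variance signal
sitting = rng.normal(0, 0.2, 5000)   # synthetic stand-in: low-variance signal

X = np.r_[window_features(walking), window_features(sitting)]
y = np.r_[np.zeros(100), np.ones(100)]          # 0 = walking, 1 = sitting

clf = RandomForestClassifier().fit(X, y)
print(clf.predict(window_features(rng.normal(0, 2.0, 500))))  # mostly 0s
```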
Recognition of group activities is fundamentally different from single- or multi-user activity recognition in that the goal is to recognize the behavior of the group as an entity, rather than the activities of the individual members within it.[10] Group behavior is emergent in nature, meaning that the properties of the behavior of the group are fundamentally different from the properties of the behavior of the individuals within it, or any sum of that behavior.[11] The main challenges are in modeling the behavior of the individual group members, as well as the roles of the individual within the group dynamic[12] and their relationship to the emergent behavior of the group in parallel.[13] Challenges which must still be addressed include quantification of the behavior and roles of individuals who join the group, integration of explicit models for role description into inference algorithms, and scalability evaluations for very large groups and crowds. Group activity recognition has applications for crowd management and response in emergency situations, as well as for social networking and Quantified Self applications.[14] Logic-based approaches keep track of all logically consistent explanations of the observed actions. Thus, all possible and consistent plans or goals must be considered. Kautz provided a formal theory of plan recognition. He described plan recognition as a logical inference process of circumscription. All actions and plans are uniformly referred to as goals, and a recognizer's knowledge is represented by a set of first-order statements, called an event hierarchy. The event hierarchy is encoded in first-order logic, which defines abstraction, decomposition and functional relationships between types of events.[15] Kautz's general framework for plan recognition has an exponential time complexity in the worst case, measured in the size of the input hierarchy. Lesh and Etzioni went one step further and presented methods for scaling up goal recognition computationally. In contrast to Kautz's approach, where the plan library is explicitly represented, Lesh and Etzioni's approach enables automatic plan-library construction from domain primitives. Furthermore, they introduced compact representations and efficient algorithms for goal recognition on large plan libraries.[16] Inconsistent plans and goals are repeatedly pruned when new actions arrive. Besides, they also presented methods for adapting a goal recognizer to handle individual idiosyncratic behavior given a sample of an individual's recent behavior. Pollack et al. described a direct argumentation model that can represent the relative strength of several kinds of arguments for belief and intention description. A serious problem of logic-based approaches is their inability or inherent infeasibility to represent uncertainty. They offer no mechanism for preferring one consistent approach to another and are incapable of deciding whether one particular plan is more likely than another, as long as both of them can be consistent enough to explain the actions observed. There is also a lack of learning ability associated with logic-based methods. Another approach to logic-based activity recognition is to use stream reasoning based on answer set programming,[17] which has been applied to recognising activities for health-related applications,[18] using weak constraints to model a degree of ambiguity/uncertainty.
Probability theory and statistical learning models have more recently been applied in activity recognition to reason about actions, plans and goals under uncertainty.[19] In the literature, there have been several approaches which explicitly represent uncertainty in reasoning about an agent's plans and goals. Using sensor data as input, Hodges and Pollack designed machine learning-based systems for identifying individuals as they perform routine daily activities such as making coffee.[20] The Intel Research (Seattle) Lab and the University of Washington at Seattle have done some important work on using sensors to detect human plans.[21][22][23] Some of these works infer user transportation modes from readings of radio-frequency identifiers (RFID) and global positioning systems (GPS). The use of temporal probabilistic models has been shown to perform well in activity recognition and generally to outperform non-temporal models.[24] Generative models such as the hidden Markov model (HMM) and the more generally formulated dynamic Bayesian networks (DBN) are popular choices in modelling activities from sensor data.[25][26][27][28] Discriminative models such as conditional random fields (CRF) are also commonly applied and also give good performance in activity recognition.[29][30] Generative and discriminative models both have their pros and cons, and the ideal choice depends on the area of application. A dataset together with implementations of a number of popular models (HMM, CRF) for activity recognition is available online. Conventional temporal probabilistic models such as the hidden Markov model (HMM) and conditional random fields (CRF) directly model the correlations between the activities and the observed sensor data. In recent years, increasing evidence has supported the use of hierarchical models which take into account the rich hierarchical structure that exists in human behavioral data.[26][31][32] The core idea here is that the model does not directly correlate the activities with the sensor data, but instead breaks the activity into sub-activities (sometimes referred to as actions) and models the underlying correlations accordingly. An example could be the activity of preparing a stir fry, which can be broken down into the subactivities or actions of cutting vegetables, frying the vegetables in a pan and serving it on a plate. Examples of such hierarchical models are layered hidden Markov models (LHMMs)[31] and the hierarchical hidden Markov model (HHMM), which have been shown to significantly outperform their non-hierarchical counterparts in activity recognition.[26] Different from traditional machine learning approaches, an approach based on data mining has been proposed recently. In the work of Gu et al., the problem of activity recognition is formulated as a pattern-based classification problem. They proposed a data mining approach based on discriminative patterns which describe significant changes between any two activity classes of data to recognize sequential, interleaved and concurrent activities in a unified solution.[33] Gilbert et al. use 2D corners in both space and time. These are grouped spatially and temporally using a hierarchical process, with an increasing search area.
At each stage of the hierarchy, the most distinctive and descriptive features are learned efficiently through data mining (Apriori rule).[34] Location-based activity recognition can also rely on GPS data to recognize activities.[35][36] It is a very important and challenging problem to track and understand the behavior of agents through videos taken by various cameras. The primary technique employed is computer vision. Vision-based activity recognition has found many applications such as human-computer interaction, user interface design, robot learning, and surveillance, among others. Scientific conferences where vision-based activity recognition work often appears are ICCV and CVPR. In vision-based activity recognition, a great deal of work has been done. Researchers have attempted a number of methods such as optical flow, Kalman filtering, hidden Markov models, etc., under different modalities such as single camera, stereo, and infrared. In addition, researchers have considered multiple aspects of this topic, including single pedestrian tracking, group tracking, and detecting dropped objects. Recently some researchers have used RGBD cameras like the Microsoft Kinect to detect human activities.[37] Depth cameras add an extra dimension, i.e. depth, which a normal 2D camera fails to provide. Sensory information from these depth cameras has been used to generate a real-time skeleton model of humans with different body positions.[38] This skeleton information provides meaningful information that researchers have used to model human activities, which are trained and later used to recognize unknown activities.[39][40] With the recent emergence of deep learning, RGB video-based activity recognition has seen rapid development. It uses videos captured by RGB cameras as input and performs several tasks, including: video classification, detection of activity start and end in videos, and spatial-temporal localization of activity and the people performing the activity.[41] Pose estimation methods[42] allow extracting more representative skeletal features for action recognition.[43] That said, it has been discovered that deep learning-based action recognition may suffer from adversarial attacks, where an attacker alters the input insignificantly to fool an action recognition system.[44] Despite remarkable progress in vision-based activity recognition, its usage for most actual visual surveillance applications remains a distant aspiration.[45] Conversely, the human brain seems to have perfected the ability to recognize human actions. This capability relies not only on acquired knowledge, but also on the aptitude for extracting information relevant to a given context and logical reasoning. Based on this observation, it has been proposed to enhance vision-based activity recognition systems by integrating commonsense reasoning and contextual and commonsense knowledge. Hierarchical Human Activity (HAR) Recognition Hierarchical human activity recognition is a technique within computer vision and machine learning. It aims to identify and comprehend human actions or behaviors from visual data.
This method entails structuring activities hierarchically, creating a framework that represents connections and interdependencies among various actions.[46] HAR techniques can be used to understand data correlations and model fundamentals in order to improve models, to balance accuracy and privacy concerns in sensitive application areas, and to identify and manage trivial labels that have no relevance in specific use cases.[47] In vision-based activity recognition, the computational process is often divided into four steps, namely human detection, human tracking, human activity recognition and then a high-level activity evaluation. In computer vision-based activity recognition, fine-grained action localization typically provides per-image segmentation masks delineating the human object and its action category (e.g., Segment-Tube[48]). Techniques such as dynamic Markov networks, CNNs and LSTMs are often employed to exploit the semantic correlations between consecutive video frames. Geometric fine-grained features such as object bounding boxes and human poses facilitate activity recognition with graph neural networks.[41][49] One way to identify specific people is by how they walk. Gait-recognition software can be used to record a person's gait or gait feature profile in a database for the purpose of recognizing that person later, even if they are wearing a disguise. When activity recognition is performed indoors and in cities using the widely available Wi-Fi signals and 802.11 access points, there is much noise and uncertainty. These uncertainties can be modeled using a dynamic Bayesian network model.[50] In a multiple goal model that can reason about a user's interleaving goals, a deterministic state transition model is applied.[51] Another possible method models the concurrent and interleaving activities in a probabilistic approach.[52] A user action discovery model could segment Wi-Fi signals to produce possible actions.[53] One of the primary ideas behind Wi-Fi activity recognition is that the signal passes through the human body during transmission, which causes reflection, diffraction, and scattering. Researchers can extract information from these signals to analyze the activity of the human body. As shown in,[54] when wireless signals are transmitted indoors, obstacles such as walls, the ground, and the human body cause various effects such as reflection, scattering, and diffraction. Therefore, the receiving end receives multiple signals from different paths at the same time, because surfaces reflect the signal during the transmission, which is known as the multipath effect. The static model is based on these two kinds of signals: the direct signal and the reflected signal. Because there is no obstacle in the direct path, direct signal transmission can be modeled by the Friis transmission equation: If we consider the reflected signal, the new equation is: When a human shows up, we have a new transmission path. Therefore, the final equation is: Δ{\displaystyle \Delta } is the approximate difference of the path length caused by the human body. In this model, we consider human motion, which causes the signal transmission path to change continuously. We can use the Doppler shift to describe this effect, which is related to the motion speed. By calculating the Doppler shift of the receiving signal, we can figure out the pattern of the movement, thereby further identifying human activity. For example, in,[55] the Doppler shift is used as a fingerprint to achieve high-precision identification for nine different movement patterns.
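For reference (the displayed equations above were lost in extraction, and the reflected-path and human-path variants from the cited papers are not reproduced here), the standard free-space form of the Friis transmission equation that the static model starts from is Pr=PtGtGr(λ/4πd)2{\displaystyle P_{r}=P_{t}G_{t}G_{r}\left({\frac {\lambda }{4\pi d}}\right)^{2}}, where Pr{\displaystyle P_{r}} is the received power, Pt{\displaystyle P_{t}} the transmitted power, Gt{\displaystyle G_{t}} and Gr{\displaystyle G_{r}} the antenna gains, λ{\displaystyle \lambda } the wavelength, and d{\displaystyle d} the distance between transmitter and receiver.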
The Fresnel zone was initially used to study the interference and diffraction of light, and was later used to construct the wireless signal transmission model. A Fresnel zone is a series of elliptical intervals whose foci are the positions of the sender and receiver. When a person is moving across different Fresnel zones, the signal path formed by the reflection of the human body changes, and if people move vertically through Fresnel zones, the change of signal will be periodic. In a pair of papers, Wang et al. applied the Fresnel model to the activity recognition task and obtained more accurate results.[56][57] In some tasks, we should consider modeling the human body accurately to achieve better results. For example,[57] described the human body as concentric cylinders for breath detection. The outside of the cylinder denotes the rib cage when people inhale, and the inside denotes it when people exhale. So the difference between the radii of these two cylinders represents the moving distance during breathing. The change of the signal phases can be expressed in the following equation: There are some popular datasets that are used for benchmarking activity recognition or action recognition algorithms. By automatically monitoring human activities, home-based rehabilitation can be provided for people suffering from traumatic brain injuries. One can find applications ranging from security-related applications and logistics support to location-based services.[61] Activity recognition systems have been developed for wildlife observation[62] and energy conservation in buildings.[63]
https://en.wikipedia.org/wiki/Activity_recognition
An adaptive neuro-fuzzy inference system or adaptive network-based fuzzy inference system (ANFIS) is a kind of artificial neural network that is based on the Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s.[1][2] Since it integrates both neural networks and fuzzy logic principles, it has the potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF–THEN rules that have learning capability to approximate nonlinear functions.[3] Hence, ANFIS is considered to be a universal estimator.[4] To use ANFIS in a more efficient and optimal way, one can use the best parameters obtained by a genetic algorithm.[5][6] It has uses in intelligent situationally aware energy management systems.[7] It is possible to identify two parts in the network structure, namely the premise and consequence parts. In more detail, the architecture is composed of five layers. The first layer of an ANFIS network is what distinguishes it from a vanilla neural network. Neural networks in general operate with a data pre-processing step, in which the features are converted into normalized values between 0 and 1. An ANFIS neural network doesn't need a sigmoid function, but instead does its preprocessing step by converting numeric values into fuzzy values.[9] Here is an example: suppose the network gets as input the distance between two points in 2D space. The distance is measured in pixels and it can have values from 0 up to 500 pixels. Converting the numerical values into fuzzy numbers is done with the membership function, which consists of semantic descriptions like near, middle and far.[10] Each possible linguistic value is given by an individual neuron. The neuron "near" fires with a value from 0 to 1 if the distance is located within the category "near", while the neuron "middle" fires if the distance is in that category. The input value "distance in pixels" is split into three different neurons for near, middle and far.
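A minimal Python sketch of this first (fuzzification) layer follows, assuming triangular membership functions; the breakpoints at 0, 250 and 500 pixels are illustrative choices for the distance example above, not values from the ANFIS literature, and the shoulder categories are approximated by narrow triangle edges.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def fuzzify_distance(d):
    """Layer-1 fuzzification: map a pixel distance to degrees of membership
    in the linguistic categories near / middle / far (one neuron each)."""
    return {
        "near":   triangular(d, -1, 0, 250),     # peak at 0 px
        "middle": triangular(d, 0, 250, 500),    # peak at 250 px
        "far":    triangular(d, 250, 500, 501),  # peak at 500 px
    }

print(fuzzify_distance(100.0))  # e.g. near=0.6, middle=0.4, far=0.0
```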
https://en.wikipedia.org/wiki/Adaptive_neuro_fuzzy_inference_system
Adaptive resonance theory(ART) is a theory developed byStephen GrossbergandGail Carpenteron aspects of how the brainprocesses information. It describes a number ofartificial neural networkmodels which usesupervisedandunsupervised learningmethods, and address problems such aspattern recognitionand prediction. The primary intuition behind the ART model is thatobject identification and recognitiongenerally occur as a result of the interaction of 'top-down' observer expectations with 'bottom-up'sensory information. The model postulates that 'top-down' expectations take the form of a memory template orprototypethat is then compared with the actual features of an object as detected by the senses. This comparison gives rise to a measure of category belongingness. As long as this difference between sensation and expectation does not exceed a set threshold called the 'vigilance parameter', the sensed object will be considered a member of the expected class. The system thus offers a solution to the 'plasticity/stability' problem, i.e. the problem of acquiring new knowledge without disrupting existing knowledge that is also calledincremental learning. The basic ART system is anunsupervised learningmodel. It typically consists of acomparison fieldand arecognition fieldcomposed ofneurons, avigilance parameter(threshold of recognition), and areset module. There are two basic methods of training ART-based neural networks: slow and fast. In the slow learning method, the degree of training of the recognition neuron's weights towards the input vector is calculated to continuous values withdifferential equationsand is thus dependent on the length of time the input vector is presented. With fast learning,algebraic equationsare used to calculate degree of weight adjustments to be made, and binary values are used. While fast learning is effective and efficient for a variety of tasks, the slow learning method is more biologically plausible and can be used with continuous-time networks (i.e. when the input vector can vary continuously). ART 1[1][2]is the simplest variety of ART networks, accepting only binary inputs.ART 2[3]extends network capabilities to support continuous inputs.ART 2-A[4]is a streamlined form of ART-2 with a drastically accelerated runtime, and with qualitative results being only rarely inferior to the full ART-2 implementation.ART 3[5]builds on ART-2 by simulating rudimentaryneurotransmitterregulation ofsynaptic activityby incorporating simulated sodium (Na+) and calcium (Ca2+) ion concentrations into the system's equations, which results in a more physiologically realistic means of partially inhibiting categories that trigger mismatch resets. ARTMAP[6]also known asPredictive ART, combines two slightly modified ART-1 or ART-2 units into a supervised learning structure where the first unit takes the input data and the second unit takes the correct output data, then used to make the minimum possible adjustment of the vigilance parameter in the first unit in order to make the correct classification. Fuzzy ART[7]implements fuzzy logic into ART's pattern recognition, thus enhancing generalizability. An optional (and very useful) feature of fuzzy ART is complement coding, a means of incorporating the absence of features into pattern classifications, which goes a long way towards preventing inefficient and unnecessary category proliferation. The applied similarity measures are based on theL1 norm. Fuzzy ART is known to be very sensitive to noise. 
Fuzzy ARTMAP[8] is merely ARTMAP using fuzzy ART units, resulting in a corresponding increase in efficacy. Simplified Fuzzy ARTMAP (SFAM)[9] constitutes a strongly simplified variant of fuzzy ARTMAP dedicated to classification tasks. Gaussian ART[10] and Gaussian ARTMAP[10] use Gaussian activation functions and computations based on probability theory. Therefore, they have some similarity with Gaussian mixture models. In comparison to fuzzy ART and fuzzy ARTMAP, they are less sensitive to noise. But the stability of learnt representations is reduced, which may lead to category proliferation in open-ended learning tasks. Fusion ART and related networks[11][12][13] extend ART and ARTMAP to multiple pattern channels. They support several learning paradigms, including unsupervised learning, supervised learning and reinforcement learning. TopoART[14] combines fuzzy ART with topology learning networks such as the growing neural gas. Furthermore, it adds a noise reduction mechanism. There are several derived neural networks which extend TopoART to further learning paradigms. Hypersphere ART[15] and Hypersphere ARTMAP[15] are closely related to fuzzy ART and fuzzy ARTMAP, respectively. But as they use a different type of category representation (namely hyperspheres), they do not require their input to be normalised to the interval [0, 1]. They apply similarity measures based on the L2 norm. LAPART[16] The Laterally Primed Adaptive Resonance Theory (LAPART) neural networks couple two Fuzzy ART algorithms to create a mechanism for making predictions based on learned associations. The coupling of the two Fuzzy ARTs has a unique stability that allows the system to converge rapidly towards a clear solution. Additionally, it can perform logical inference and supervised learning similar to fuzzy ARTMAP. It has been noted that results of Fuzzy ART and ART 1 (i.e., the learnt categories) depend critically upon the order in which the training data are processed. The effect can be reduced to some extent by using a slower learning rate, but is present regardless of the size of the input data set. Hence Fuzzy ART and ART 1 estimates do not possess the statistical property of consistency.[17] This problem can be considered as a side effect of the respective mechanisms ensuring stable learning in both networks. More advanced ART networks such as TopoART and Hypersphere TopoART that summarise categories to clusters may solve this problem, as the shapes of the clusters do not depend on the order of creation of the associated categories. (cf. Fig. 3(g, h) and Fig. 4 of [18])
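The vigilance mechanism can be illustrated with a toy ART-1-style procedure in Python. This sketch keeps only the match/reset logic for binary inputs (prototype intersection under fast learning) and omits the choice function, search order, and comparison-field dynamics of the full architecture, so it illustrates the vigilance idea rather than faithfully implementing ART 1; the vigilance values are arbitrary.

```python
import numpy as np

def art1_learn(patterns, vigilance=0.7):
    """Toy ART-1-style clustering of binary vectors. A pattern joins the first
    category whose match ratio |x AND w| / |x| clears the vigilance threshold;
    otherwise a mismatch 'reset' creates a new category from the pattern."""
    categories = []   # prototype weight vectors (boolean arrays)
    labels = []
    for x in patterns:
        x = np.asarray(x, dtype=bool)
        for j, w in enumerate(categories):
            match = (x & w).sum() / max(x.sum(), 1)
            if match >= vigilance:
                categories[j] = x & w   # fast learning: intersect the prototype
                labels.append(j)
                break
        else:
            categories.append(x)        # reset: commit a new category
            labels.append(len(categories) - 1)
    return labels, categories

labels, cats = art1_learn([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], vigilance=0.6)
print(labels)  # -> [0, 0, 1]: the third pattern triggers a new category
```

Raising the vigilance parameter makes categories narrower (more resets, more categories); lowering it yields coarser, more general categories, which is exactly the plasticity/stability trade-off described above.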
https://en.wikipedia.org/wiki/Adaptive_resonance_theory
In probability theory and information theory, adjusted mutual information, a variation of mutual information, may be used for comparing clusterings.[1] It corrects for the effect of agreement due solely to chance between clusterings, similar to the way the adjusted Rand index corrects the Rand index. It is closely related to variation of information:[2] when a similar adjustment is made to the VI index, it becomes equivalent to the AMI.[1] The adjusted measure, however, is no longer metrical.[3]

Given a set S of N elements S = {s_1, s_2, ..., s_N}, consider two partitions of S, namely U = {U_1, U_2, ..., U_R} with R clusters, and V = {V_1, V_2, ..., V_C} with C clusters. It is presumed here that the partitions are so-called hard clusters; the partitions are pairwise disjoint:

U_i \cap U_j = \emptyset = V_i \cap V_j for all i \neq j,

and complete:

\bigcup_{i=1}^{R} U_i = \bigcup_{j=1}^{C} V_j = S.

The mutual information of cluster overlap between U and V can be summarized in the form of an R x C contingency table M = [n_{ij}], i = 1...R, j = 1...C, where n_{ij} denotes the number of objects that are common to clusters U_i and V_j. That is,

n_{ij} = |U_i \cap V_j|.

Suppose an object is picked at random from S; the probability that the object falls into cluster U_i is:

P_U(i) = |U_i| / N.

The entropy associated with the partitioning U is:

H(U) = -\sum_{i=1}^{R} P_U(i) \log P_U(i).

H(U) is non-negative and takes the value 0 only when there is no uncertainty determining an object's cluster membership, i.e., when there is only one cluster. Similarly, the entropy of the clustering V can be calculated as:

H(V) = -\sum_{j=1}^{C} P_V(j) \log P_V(j),

where P_V(j) = |V_j| / N.

The mutual information (MI) between the two partitions is:

MI(U,V) = \sum_{i=1}^{R} \sum_{j=1}^{C} P_{UV}(i,j) \log \frac{P_{UV}(i,j)}{P_U(i) P_V(j)},

where P_{UV}(i,j) denotes the probability that a point belongs to both the cluster U_i in U and cluster V_j in V:

P_{UV}(i,j) = |U_i \cap V_j| / N.

MI is a non-negative quantity upper bounded by the entropies H(U) and H(V). It quantifies the information shared by the two clusterings and thus can be employed as a clustering similarity measure. Like the Rand index, the baseline value of mutual information between two random clusterings does not take on a constant value, and tends to be larger when the two partitions have a larger number of clusters (with a fixed number of set elements N). By adopting a hypergeometric model of randomness, it can be shown that the expected mutual information between two random clusterings is:

E\{MI(U,V)\} = \sum_{i=1}^{R} \sum_{j=1}^{C} \sum_{n_{ij}=(a_i+b_j-N)^+}^{\min(a_i, b_j)} \frac{n_{ij}}{N} \log\left(\frac{N\, n_{ij}}{a_i b_j}\right) \frac{a_i!\, b_j!\, (N-a_i)!\, (N-b_j)!}{N!\, n_{ij}!\, (a_i-n_{ij})!\, (b_j-n_{ij})!\, (N-a_i-b_j+n_{ij})!},

where (a_i + b_j - N)^+ denotes max(0, a_i + b_j - N). The variables a_i and b_j are partial sums of the contingency table; that is,

a_i = \sum_{j=1}^{C} n_{ij} and b_j = \sum_{i=1}^{R} n_{ij}.

The adjusted measure[1] for the mutual information may then be defined to be:

AMI(U,V) = \frac{MI(U,V) - E\{MI(U,V)\}}{\max\{H(U), H(V)\} - E\{MI(U,V)\}}.

The AMI takes a value of 1 when the two partitions are identical and 0 when the MI between two partitions equals the value expected due to chance alone.
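The quantities above can be computed directly from the contingency table. The following is a rough sketch, assuming natural logarithms and the max-entropy normalisation given above; it is illustrative rather than optimized (scikit-learn's adjusted_mutual_info_score is a production alternative).

```python
import numpy as np
from math import lgamma, log

def entropy(sizes, N):
    p = np.asarray(sizes, dtype=float) / N
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def mutual_info(M, N):
    a, b = M.sum(axis=1), M.sum(axis=0)            # row and column sums a_i, b_j
    mi = 0.0
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            if M[i, j] > 0:
                mi += (M[i, j] / N) * log(N * M[i, j] / (a[i] * b[j]))
    return mi

def expected_mi(M, N):
    """E{MI} under the hypergeometric model of random clusterings."""
    a, b = M.sum(axis=1), M.sum(axis=0)
    emi = 0.0
    for ai in a:
        for bj in b:
            for nij in range(max(1, ai + bj - N), min(ai, bj) + 1):
                # log of the hypergeometric probability of observing n_ij
                logp = (lgamma(ai + 1) + lgamma(bj + 1)
                        + lgamma(N - ai + 1) + lgamma(N - bj + 1)
                        - lgamma(N + 1) - lgamma(nij + 1)
                        - lgamma(ai - nij + 1) - lgamma(bj - nij + 1)
                        - lgamma(N - ai - bj + nij + 1))
                emi += np.exp(logp) * (nij / N) * log(N * nij / (ai * bj))
    return emi

def ami(M):
    M = np.asarray(M); N = M.sum()
    mi, emi = mutual_info(M, N), expected_mi(M, N)
    h = max(entropy(M.sum(axis=1), N), entropy(M.sum(axis=0), N))
    return (mi - emi) / (h - emi)

print(ami([[5, 1], [1, 5]]))    # identical partitions would give exactly 1.0
```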
https://en.wikipedia.org/wiki/Adjusted_mutual_information
AIVA (Artificial Intelligence Virtual Artist) is an electronic composer recognized by the SACEM. Created in February 2016, AIVA specializes in classical and symphonic music composition.[1][2] It became the world's first virtual composer to be recognized by a music society (SACEM).[3][4] By reading a large collection of existing works of classical music (written by human composers such as Bach, Beethoven and Mozart), AIVA detects regularities in music and on this basis composes on its own.[5][6] The AIVA algorithm is based on deep learning and reinforcement learning architectures.[7] Since January 2019, the company has offered a commercial product, Music Engine, capable of generating short (up to 3 minutes) compositions in various styles (rock, pop, jazz, fantasy, shanty, tango, 20th century cinematic, modern cinematic, and Chinese). AIVA was presented at TED[8] by Pierre Barreau.[9]

AIVA is a published composer:[10] its first studio album, "Genesis", was released in November 2016,[11][12] and its second album, "Among the Stars", in 2018. The Avignon Symphonic Orchestra (ORAP) also performed AIVA's compositions[2] in April 2017.[13][14]
https://en.wikipedia.org/wiki/AIVA
AIXI/ˈaɪksi/is a theoreticalmathematical formalismforartificial general intelligence. It combinesSolomonoff inductionwithsequential decision theory. AIXI was first proposed byMarcus Hutterin 2000[1]and several results regarding AIXI are proved in Hutter's 2005 bookUniversal Artificial Intelligence.[2] AIXI is areinforcement learning(RL) agent. It maximizes the expected total rewards received from the environment. Intuitively, it simultaneously considers every computable hypothesis (or environment). In each time step, it looks at every possible program and evaluates how many rewards that program generates depending on the next action taken. The promised rewards are then weighted by thesubjective beliefthat this program constitutes the true environment. This belief is computed from the length of the program: longer programs are considered less likely, in line withOccam's razor. AIXI then selects the action that has the highest expected total reward in the weighted sum of all these programs. According to Hutter, the word "AIXI" can have several interpretations. AIXI can stand for AI based on Solomonoff's distribution, denoted byξ{\displaystyle \xi }(which is the Greek letter xi), or e.g. it can stand for AI "crossed" (X) with induction (I). There are other interpretations.[3] AIXI is a reinforcement learning agent that interacts with some stochastic and unknown but computable environmentμ{\displaystyle \mu }. The interaction proceeds in time steps, fromt=1{\displaystyle t=1}tot=m{\displaystyle t=m}, wherem∈N{\displaystyle m\in \mathbb {N} }is the lifespan of the AIXI agent. At time stept, the agent chooses an actionat∈A{\displaystyle a_{t}\in {\mathcal {A}}}(e.g. a limb movement) and executes it in the environment, and the environment responds with a "percept"et∈E=O×R{\displaystyle e_{t}\in {\mathcal {E}}={\mathcal {O}}\times \mathbb {R} }, which consists of an "observation"ot∈O{\displaystyle o_{t}\in {\mathcal {O}}}(e.g., a camera image) and a rewardrt∈R{\displaystyle r_{t}\in \mathbb {R} }, distributed according to theconditional probabilityμ(otrt|a1o1r1...at−1ot−1rt−1at){\displaystyle \mu (o_{t}r_{t}|a_{1}o_{1}r_{1}...a_{t-1}o_{t-1}r_{t-1}a_{t})}, wherea1o1r1...at−1ot−1rt−1at{\displaystyle a_{1}o_{1}r_{1}...a_{t-1}o_{t-1}r_{t-1}a_{t}}is the "history" of actions, observations and rewards. The environmentμ{\displaystyle \mu }is thus mathematically represented as aprobability distributionover "percepts" (observations and rewards) which depend on thefullhistory, so there is noMarkov assumption(as opposed to other RL algorithms). Note again that this probability distribution isunknownto the AIXI agent. Furthermore, note again thatμ{\displaystyle \mu }is computable, that is, the observations and rewards received by the agent from the environmentμ{\displaystyle \mu }can be computed by some program (which runs on aTuring machine), given the past actions of the AIXI agent.[4] Theonlygoal of the AIXI agent is to maximize∑t=1mrt{\displaystyle \sum _{t=1}^{m}r_{t}}, that is, the sum of rewards from time step 1 to m. The AIXI agent is associated with a stochastic policyπ:(A×E)∗→A{\displaystyle \pi :({\mathcal {A}}\times {\mathcal {E}})^{*}\rightarrow {\mathcal {A}}}, which is the function it uses to choose actions at every time step, whereA{\displaystyle {\mathcal {A}}}is the space of all possible actions that AIXI can take andE{\displaystyle {\mathcal {E}}}is the space of all possible "percepts" that can be produced by the environment. 
The environment (or probability distribution) \mu can also be thought of as a stochastic policy (which is a function): \mu : ({\mathcal {A}} \times {\mathcal {E}})^{*} \times {\mathcal {A}} \rightarrow {\mathcal {E}}, where the * is the Kleene star operation.

In general, at time step t (which ranges from 1 to m), AIXI, having previously executed actions a_1 \dots a_{t-1} (which is often abbreviated in the literature as a_{<t}) and having observed the history of percepts o_1 r_1 \dots o_{t-1} r_{t-1} (which can be abbreviated as e_{<t}), chooses and executes in the environment the action a_t, defined as follows:[3]

a_t := \arg\max_{a_t} \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m} [r_t + \ldots + r_m] \sum_{q:\; U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\textrm{length}(q)}

or, using parentheses to disambiguate the precedences,

a_t := \arg\max_{a_t} \left( \sum_{o_t r_t} \ldots \left( \max_{a_m} \sum_{o_m r_m} [r_t + \ldots + r_m] \sum_{q:\; U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\textrm{length}(q)} \right) \ldots \right)

Intuitively, in the definition above, AIXI considers the sum of the total reward over all possible "futures" up to m - t time steps ahead (that is, from t to m), weighs each of them by the complexity of the programs q (that is, by 2^{-\textrm{length}(q)}) consistent with the agent's past (that is, the previously executed actions a_{<t} and received percepts e_{<t}) that can generate that future, and then picks the action that maximizes expected future rewards.[4]

Let us break this definition down in order to attempt to fully understand it.

o_t r_t is the "percept" (which consists of the observation o_t and the reward r_t) received by the AIXI agent at time step t from the environment (which is unknown and stochastic). Similarly, o_m r_m is the percept received by AIXI at time step m (the last time step where AIXI is active).

r_t + \ldots + r_m is the sum of rewards from time step t to time step m, so AIXI needs to look into the future to choose its action at time step t.

U denotes a monotone universal Turing machine, and q ranges over all (deterministic) programs on the universal machine U, which receives as input the program q and the sequence of actions a_1 \dots a_m (that is, all actions), and produces the sequence of percepts o_1 r_1 \ldots o_m r_m. The universal Turing machine U is thus used to "simulate" or compute the environment responses or percepts, given the program q (which "models" the environment) and all actions of the AIXI agent: in this sense, the environment is "computable" (as stated above). Note that, in general, the program which "models" the current and actual environment (where AIXI needs to act) is unknown because the current environment is also unknown.

\textrm{length}(q) is the length of the program q (which is encoded as a string of bits). Note that 2^{-\textrm{length}(q)} = \frac{1}{2^{\textrm{length}(q)}}.
Hence, in the definition above, \sum_{q:\; U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\textrm{length}(q)} should be interpreted as a mixture (in this case, a sum) over all computable environments (which are consistent with the agent's past), each weighted by its complexity 2^{-\textrm{length}(q)}. Note that a_1 \ldots a_m can also be written as a_1 \ldots a_{t-1} a_t \ldots a_m, and a_1 \ldots a_{t-1} = a_{<t} is the sequence of actions already executed in the environment by the AIXI agent. Similarly, o_1 r_1 \ldots o_m r_m = o_1 r_1 \ldots o_{t-1} r_{t-1} o_t r_t \ldots o_m r_m, and o_1 r_1 \ldots o_{t-1} r_{t-1} is the sequence of percepts produced by the environment so far.

Let us now put all these components together in order to understand this equation or definition. At time step t, AIXI chooses the action a_t where the function \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m} [r_t + \ldots + r_m] \sum_{q:\; U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\textrm{length}(q)} attains its maximum.

The parameters to AIXI are the universal Turing machine U and the agent's lifetime m, which need to be chosen. The latter parameter can be removed by the use of discounting.

AIXI's performance is measured by the expected total number of rewards it receives. AIXI has been proven to be optimal in several senses,[2] including Pareto optimality and balanced Pareto optimality. It was later shown by Hutter and Jan Leike that balanced Pareto optimality is subjective and that any policy can be considered Pareto optimal, which they describe as undermining all previous optimality claims for AIXI.[5]

However, AIXI does have limitations. It is restricted to maximizing rewards based on percepts as opposed to external states. It also assumes it interacts with the environment solely through action and percept channels, preventing it from considering the possibility of being damaged or modified. Colloquially, this means that it doesn't consider itself to be contained by the environment it interacts with. It also assumes the environment is computable.[6]

Like Solomonoff induction, AIXI is incomputable. However, there are computable approximations of it. One such approximation is AIXItl, which performs at least as well as the provably best time-t and space-l limited agent.[2] Another approximation to AIXI with a restricted environment class is MC-AIXI (FAC-CTW) (which stands for Monte Carlo AIXI FAC-Context-Tree Weighting), which has had some success playing simple games such as partially observable Pac-Man.[4][7]
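True AIXI sums over all programs on a universal Turing machine and is incomputable, but the shape of the decision rule can be caricatured with a tiny, hand-written model class. In the sketch below, the "programs" are three deterministic toy models (an assumption for illustration), each weighted by 2^-length as a stand-in for the universal prior, and the expectimax runs over a short finite horizon.

```python
ACTIONS = (0, 1)

# name -> (program length in bits, model), where a model maps
# (past actions, current action) -> (observation, reward), like U(q, a_1..a_m).
MODELS = {
    "reward_action_1":  (3, lambda past, a: (0, 1.0 if a == 1 else 0.0)),
    "reward_action_0":  (3, lambda past, a: (0, 1.0 if a == 0 else 0.0)),
    "reward_alternate": (6, lambda past, a: (0, 1.0 if a == len(past) % 2 else 0.0)),
}

def q_value(past_actions, weights, action, horizon):
    """Mixture-expected total reward of `action` followed by optimal play."""
    branches = {}
    for name, w in weights.items():
        percept = MODELS[name][1](past_actions, action)
        branches.setdefault(percept, {})[name] = w
    total = 0.0
    for (obs, reward), group in branches.items():
        p = sum(group.values())                      # P(percept | history, action)
        posterior = {n: w / p for n, w in group.items()}   # Bayes update on the percept
        future = 0.0
        if horizon > 1:
            future = max(q_value(past_actions + [action], posterior, a, horizon - 1)
                         for a in ACTIONS)
        total += p * (reward + future)
    return total

def aixi_action(past_actions, horizon=4):
    weights = {n: 2.0 ** -bits for n, (bits, _) in MODELS.items()}
    z = sum(weights.values())
    weights = {n: w / z for n, w in weights.items()}       # normalised prior
    return max(ACTIONS, key=lambda a: q_value(past_actions, weights, a, horizon))

print(aixi_action([]))   # -> 0: two of the three programs reward action 0 initially
```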
https://en.wikipedia.org/wiki/AIXI
AlchemyAPI was a software company in the field of machine learning. Its technology employed deep learning for various applications in natural language processing, such as semantic text analysis and sentiment analysis, as well as computer vision. AlchemyAPI offered both traditionally licensed software products and API access under a software-as-a-service model.[1][2] After its acquisition by IBM in 2015, the products were integrated into the Watson line of products and the brand name eventually disappeared.

As the name suggests, the business model of charging for access to an API was central to the company's identity and uncommon for its time: a TechCrunch article highlighted that even though the technology was similar to IBM's Watson, the pay-per-use model made it more accessible, especially to non-enterprise customers.[2] At one point, AlchemyAPI served over 3 billion API calls per month.

AlchemyAPI was founded by Elliot Turner[3] in 2005,[4] and launched its API in 2009.[2] In September 2011, ProgrammableWeb added AlchemyAPI to its API Billionaires Club, alongside giants such as Google and Facebook.[2][5] In February 2013, it was announced that AlchemyAPI had raised US$2 million to improve the capabilities of its deep learning technology.[2][6][7][8] In September 2013, it was reported that AlchemyAPI had created a Google Glass app that could identify what a person was looking at, and that AlchemyAPI would soon be rolling out deep-learning-based image recognition as a service.[9][10] As of February 2014 (prior to the IBM acquisition), it claimed to have clients in 36 countries and to process over 3 billion documents a month. In May 2014, it was reported that AlchemyAPI had released a computer vision API known as AlchemyVision, capable of recognizing objects in photographs and providing image similarity search capabilities.[11]

In March 2015, it was announced that AlchemyAPI had been acquired by IBM and that the company's breakthroughs in deep learning would accelerate IBM's development of next-generation cognitive computing applications. IBM reported plans to integrate AlchemyAPI's deep learning technology into the core Watson platform.[12]

A February 2013 article in VentureBeat about big data named AlchemyAPI as one of the primary forces responsible for bringing natural language processing capabilities to the masses.[13] In November 2013, GigaOm listed AlchemyAPI as one of the top startups working in deep learning, along with Cortica and Ersatz.[14]
https://en.wikipedia.org/wiki/AlchemyAPI
Algorithm selection (sometimes also called per-instance algorithm selection or offline algorithm selection) is a meta-algorithmic technique to choose an algorithm from a portfolio on an instance-by-instance basis. It is motivated by the observation that, on many practical problems, different algorithms have different performance characteristics. That is, while one algorithm performs well on some instances, it performs poorly on others, and vice versa for another algorithm. If we can identify when to use which algorithm, we can optimize for each scenario and improve overall performance. This is what algorithm selection aims to do. The only prerequisite for applying algorithm selection techniques is that there exists (or that there can be constructed) a set of complementary algorithms.

Given a portfolio {\mathcal {P}} of algorithms {\mathcal {A}} \in {\mathcal {P}}, a set of instances i \in {\mathcal {I}} and a cost metric m : {\mathcal {P}} \times {\mathcal {I}} \to \mathbb{R}, the algorithm selection problem consists of finding a mapping s : {\mathcal {I}} \to {\mathcal {P}} from instances to algorithms such that the cost \sum_{i \in {\mathcal {I}}} m(s(i), i) across all instances is optimized.[1][2]

A well-known application of algorithm selection is the Boolean satisfiability problem. Here, the portfolio of algorithms is a set of (complementary) SAT solvers, the instances are Boolean formulas, and the cost metric is, for example, average runtime or the number of unsolved instances. So, the goal is to select a well-performing SAT solver for each individual instance. In the same way, algorithm selection can be applied to many other NP-hard problems (such as mixed integer programming, CSP, AI planning, TSP, MAXSAT, QBF and answer set programming). Competition-winning systems in SAT are SATzilla,[3] 3S[4] and CSHC.[5]

In machine learning, algorithm selection is better known as meta-learning. The portfolio of algorithms consists of machine learning algorithms (e.g., random forest, SVM, DNN), the instances are data sets, and the cost metric is, for example, the error rate. So, the goal is to predict which machine learning algorithm will have a small error on each data set.

The algorithm selection problem is mainly solved with machine learning techniques. By representing the problem instances by numerical features f, algorithm selection can be seen as a multi-class classification problem by learning a mapping f_i \mapsto {\mathcal {A}} for a given instance i.

Instance features are numerical representations of instances. For example, we can count the number of variables, clauses and the average clause length for Boolean formulas,[6] or the number of samples, features and class balance for ML data sets to get an impression of their characteristics. Two kinds of features are commonly distinguished.

Depending on the performance metric m used, feature computation can be associated with costs. For example, if we use running time as the performance metric, we include the time to compute our instance features into the performance of an algorithm selection system.
SAT solving is a concrete example where such feature costs cannot be neglected, since instance features for CNF formulas can be either very cheap (e.g., getting the number of variables can be done in constant time for CNFs in the DIMACS format) or very expensive (e.g., graph features, which can cost tens or hundreds of seconds). It is important to take the overhead of feature computation into account in practice in such scenarios; otherwise a misleading impression of the performance of the algorithm selection approach is created. For example, if the decision of which algorithm to choose can be made with perfect accuracy, but the features are the running times of the portfolio algorithms, there is no benefit to the portfolio approach. This would not be obvious if feature costs were omitted.

One of the first successful algorithm selection approaches predicted the performance of each algorithm, \hat{m}_{\mathcal {A}} : {\mathcal {I}} \to \mathbb{R}, and selected the algorithm with the best predicted performance, \arg\min_{{\mathcal {A}} \in {\mathcal {P}}} \hat{m}_{\mathcal {A}}(i), for an instance i.[3]

A common assumption is that the given set of instances {\mathcal {I}} can be clustered into homogeneous subsets, and for each of these subsets there is one well-performing algorithm for all instances in it. So, the training consists of identifying the homogeneous clusters via an unsupervised clustering approach and associating an algorithm with each cluster. A new instance is assigned to a cluster and the associated algorithm selected.[7]

A more modern approach is cost-sensitive hierarchical clustering,[5] using supervised learning to identify the homogeneous instance subsets.

A common approach for multi-class classification is to learn pairwise models between every pair of classes (here algorithms) and choose the class that was predicted most often by the pairwise models. We can weight the instances of the pairwise prediction problem by the performance difference between the two algorithms. This is motivated by the fact that we care most about getting predictions with large differences correct, but the penalty for an incorrect prediction is small if there is almost no performance difference. Therefore, each instance i for training a classification model {\mathcal {A}}_1 vs {\mathcal {A}}_2 is associated with a cost |m({\mathcal {A}}_1, i) - m({\mathcal {A}}_2, i)|.[8]

Algorithm selection can be applied effectively whenever there exists a set of complementary algorithms and the cost of selection (including feature computation) is small compared to the cost of running a poorly suited algorithm. It is not limited to single domains, but can be applied to any kind of algorithm satisfying these requirements; application domains include the NP-hard problems mentioned above as well as machine learning. For an extensive list of literature about algorithm selection, we refer to a literature overview.

Online algorithm selection refers to switching between different algorithms during the solving process. This is useful as a hyper-heuristic. In contrast, offline algorithm selection selects an algorithm for a given instance only once and before the solving process.

An extension of algorithm selection is the per-instance algorithm scheduling problem, in which we do not select only one solver, but we select a time budget for each algorithm on a per-instance basis.
This approach improves the performance of selection systems, in particular when the instance features are not very informative and a wrong selection of a single solver is therefore likely.[11]

Given the increasing importance of parallel computation, an extension of algorithm selection for parallel computation is parallel portfolio selection, in which we select a subset of the algorithms to run simultaneously in a parallel portfolio.[12]
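As a rough illustration of the performance-prediction approach described earlier, the sketch below fits one regression model per portfolio algorithm on synthetic data and selects the algorithm with the lowest predicted cost. The solver names, the feature model and the runtime distributions are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_train, n_features = 200, 5
X = rng.uniform(size=(n_train, n_features))        # instance features f(i)

# Hypothetical portfolio of two solvers with complementary behaviour:
# solver_A is fast when feature 0 is small, solver_B when it is large.
runtimes = {
    "solver_A": 10 * X[:, 0] + rng.normal(0, 0.5, n_train),
    "solver_B": 10 * (1 - X[:, 0]) + rng.normal(0, 0.5, n_train),
}

# One runtime model m̂_A per algorithm, as in the prediction-based approach.
models = {name: RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
          for name, y in runtimes.items()}

def select(features):
    """Pick the algorithm with the lowest predicted cost: argmin_A m̂_A(i)."""
    preds = {name: m.predict(features.reshape(1, -1))[0] for name, m in models.items()}
    return min(preds, key=preds.get)

print(select(np.array([0.9, 0.5, 0.5, 0.5, 0.5])))   # -> "solver_B"
print(select(np.array([0.1, 0.5, 0.5, 0.5, 0.5])))   # -> "solver_A"
```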
https://en.wikipedia.org/wiki/Algorithm_selection
Algorithmic inference gathers new developments in the statistical inference methods made feasible by the powerful computing devices widely available to any data analyst. Cornerstones in this field are computational learning theory, granular computing, bioinformatics, and, long ago, structural probability (Fraser 1966). The main focus is on the algorithms which compute statistics rooting the study of a random phenomenon, along with the amount of data they must feed on to produce reliable results. This shifts the interest of mathematicians from the study of the distribution laws to the functional properties of the statistics, and the interest of computer scientists from the algorithms for processing data to the information they process.

Concerning the identification of the parameters of a distribution law, the mature reader may recall lengthy disputes in the mid 20th century about the interpretation of their variability in terms of fiducial distribution (Fisher 1956), structural probabilities (Fraser 1966), priors/posteriors (Ramsey 1925), and so on. From an epistemology viewpoint, this entailed a companion dispute as to the nature of probability: is it a physical feature of phenomena to be described through random variables, or a way of synthesizing data about a phenomenon? Opting for the latter, Fisher defines a fiducial distribution law of parameters of a given random variable that he deduces from a sample of its specifications. With this law he computes, for instance, "the probability that μ (mean of a Gaussian variable – our note) is less than any assigned value, or the probability that it lies between any assigned values, or, in short, its probability distribution, in the light of the sample observed".

Fisher fought hard to defend the difference and superiority of his notion of parameter distribution in comparison to analogous notions, such as Bayes' posterior distribution, Fraser's constructive probability and Neyman's confidence intervals. For half a century, Neyman's confidence intervals won out for all practical purposes, crediting the phenomenological nature of probability. With this perspective, when you deal with a Gaussian variable, its mean μ is fixed by the physical features of the phenomenon you are observing, where the observations are random operators, hence the observed values are specifications of a random sample. Because of their randomness, you may compute from the sample specific intervals containing the fixed μ with a given probability that you denote confidence.

Let X be a Gaussian variable[1] with parameters \mu and \sigma^2, and \{X_1, \ldots, X_m\} a sample drawn from it. Working with the statistics

s_\mu = \sum_{i=1}^m x_i and s_{\sigma^2} = \sum_{i=1}^m (x_i - \bar{x})^2, where \bar{x} = s_\mu / m is the sample mean,

we recognize that

T = \frac{(\bar{X} - \mu)\sqrt{m(m-1)}}{\sqrt{S_{\sigma^2}}}

follows a Student's t distribution (Wilks 1962) with parameter (degrees of freedom) m - 1, so that gauging T between two quantiles and inverting its expression as a function of \mu you obtain confidence intervals for \mu.

With a sample specification of size m = 10, you compute the statistics s_\mu = 43.37 and s_{\sigma^2} = 46.07, and obtain a 0.90 confidence interval for \mu with extremes (3.03, 5.65).

From a modeling perspective the entire dispute looks like a chicken-egg dilemma: either fixed data first and the probability distribution of their properties as a consequence, or fixed properties first and the probability distribution of the observed data as a corollary.
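The 0.90 interval quoted above can be reproduced numerically from the reported statistics; a small check, assuming SciPy for the Student's t quantiles:

```python
from scipy import stats

# Reported statistics: s_mu = sum of observations, s_sigma2 = sum of squared
# deviations, sample size m = 10, confidence level 0.90.
m, s_mu, s_sigma2 = 10, 43.37, 46.07
x_bar = s_mu / m

t = stats.t.ppf(0.95, df=m - 1)                   # the 0.05/0.95 quantiles are -t/+t
half_width = t * (s_sigma2 / (m * (m - 1))) ** 0.5

print(round(x_bar - half_width, 2), round(x_bar + half_width, 2))   # 3.03 5.65
```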
The classic solution has one benefit and one drawback. The former was appreciated particularly back when people still did computations with sheet and pencil. Per se, the task of computing a Neyman confidence interval for the fixed parameter θ is hard: you do not know θ, but you look for disposing around it an interval with a possibly very low probability of failing. The analytical solution is allowed for a very limited number of theoretical cases. Vice versa, a large variety of instances may be quickly solved in an approximate way via the central limit theorem, in terms of a confidence interval around a Gaussian distribution – that's the benefit. The drawback is that the central limit theorem is applicable only when the sample size is sufficiently large. Therefore, it is less and less applicable with the samples involved in modern inference instances. The fault does not lie in the sample size itself. Rather, this size is not sufficiently large because of the complexity of the inference problem.

With the availability of large computing facilities, scientists refocused from isolated parameter inference to complex function inference, i.e. to sets of highly nested parameters identifying functions. In these cases we speak about learning of functions (in terms, for instance, of regression, neuro-fuzzy systems or computational learning) on the basis of highly informative samples. A first effect of having a complex structure linking data is the reduction of the number of sample degrees of freedom, i.e. the burning of a part of the sample points, so that the effective sample size to be considered in the central limit theorem is too small. Focusing on the sample size ensuring a limited learning error with a given confidence level, the consequence is that the lower bound on this size grows with complexity indices such as the VC dimension or detail of the class to which the function we want to learn belongs.

A sample of 1,000 independent bits is enough to ensure an absolute error of at most 0.081 on the estimation of the parameter p of the underlying Bernoulli variable with a confidence of at least 0.99. The same size cannot guarantee a threshold less than 0.088 with the same confidence 0.99 when the error is identified with the probability that a 20-year-old man living in New York does not fit the ranges of height, weight and waistline observed on 1,000 Big Apple inhabitants. The accuracy shortage occurs because both the VC dimension and the detail of the class of parallelepipeds, among which the one observed from the 1,000 inhabitants' ranges falls, are equal to 6.

With insufficiently large samples, the approach fixed sample – random properties suggests inference procedures in three steps: devising a sampling mechanism for the random variable, deriving master equations relating the observed statistics to the parameters and the sample seeds, and transferring the seed distribution to a distribution over the parameters. For instance, a sampling mechanism (U, g_{(a,k)}) for a Pareto variable X with seed U reads:

x = g_{(a,k)}(u) = k (1 - u)^{-1/a},

or, equivalently, g_{(a,k)}(u) = k u^{-1/a}.

With these relations we may inspect the values of the parameters that could have generated a sample with the observed statistic from a particular setting of the seeds representing the seed of the sample. Hence, to the population of sample seeds corresponds a population of parameters. In order to ensure this population has clean properties, it is enough to draw the seed values randomly and involve either sufficient statistics or, simply, well-behaved statistics w.r.t. the parameters in the master equations.
For example, the statistics s_1 = \sum_{i=1}^m \log x_i and s_2 = \min_{i=1,\ldots,m} \{x_i\} prove to be sufficient for the parameters a and k of a Pareto random variable X. Thanks to the (equivalent form of the) sampling mechanism g_{(a,k)} we may read them as

s_1 = m \log k - \frac{1}{a} \sum_{i=1}^m \log u_i and s_2 = k \left(\max_{i=1,\ldots,m} u_i\right)^{-1/a},

respectively, where s_1 and s_2 are the observed statistics and u_1, \ldots, u_m a set of uniform seeds. Transferring to the parameters the probability (density) affecting the seeds, you obtain the distribution law of the random parameters A and K compatible with the statistics you have observed. Compatibility denotes parameters of compatible populations, i.e. of populations that could have generated a sample giving rise to the observed statistics. You may formalize this notion as follows:

For a random variable and a sample drawn from it, a compatible distribution is a distribution having the same sampling mechanism {\mathcal {M}}_X = (Z, g_{\boldsymbol{\theta}}) of X with a value {\boldsymbol{\theta}} of the random parameter \mathbf{\Theta} derived from a master equation rooted in a well-behaved statistic.

You may find the distribution law of the Pareto parameters A and K as an implementation example of the population bootstrap method, as in the figure on the left. Implementing the twisting argument method, you get the distribution law F_M(\mu) of the mean M of a Gaussian variable X on the basis of the statistic s_M = \sum_{i=1}^m x_i when \Sigma^2 is known to be equal to \sigma^2 (Apolloni, Malchiodi & Gaito 2006). Its expression is:

F_M(\mu) = \Phi\left(\frac{m\mu - s_M}{\sqrt{m}\,\sigma}\right),

shown in the figure on the right, where \Phi is the cumulative distribution function of a standard normal distribution.

Computing a confidence interval for M given its distribution function is straightforward: we need only find two quantiles (for instance the \delta/2 and 1 - \delta/2 quantiles, in case we are interested in a confidence interval of level δ symmetric in the tails' probabilities) as indicated on the left in the diagram showing the behavior of the two bounds for different values of the statistic s_M.

The Achilles heel of Fisher's approach lies in the joint distribution of more than one parameter, say the mean and variance of a Gaussian distribution. On the contrary, with the last approach (and the above-mentioned methods: population bootstrap and twisting argument) we may learn the joint distribution of many parameters. For instance, focusing on the distribution of two or more parameters, in the figures below we report two confidence regions where the function to be learnt falls with a confidence of 90%. The former concerns the probability with which an extended support vector machine attributes a binary label 1 to the points of the (x, y) plane. The two surfaces are drawn on the basis of a set of sample points, in turn labelled according to a specific distribution law (Apolloni et al. 2008). The latter concerns the confidence region of the hazard rate of breast cancer recurrence computed from a censored sample (Apolloni, Malchiodi & Gaito 2006).
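A minimal sketch of the population bootstrap for the Pareto parameters might look as follows, assuming the master equations above and hypothetical observed statistics s1 and s2; each draw of uniform seeds is solved in closed form for a compatible (a, k) pair. With S = Σ log u_i and M = max u_i, the two master equations give a = (m log M − S) / (s1 − m log s2) and k = s2 · M^{1/a}.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 30
s1, s2 = 15.0, 1.1       # hypothetical observed statistics: sum(log x), min(x)

a_samples, k_samples = [], []
for _ in range(10000):
    u = rng.uniform(size=m)                 # seeds of one candidate population
    S, M = np.log(u).sum(), u.max()
    a = (m * np.log(M) - S) / (s1 - m * np.log(s2))   # solve the master equations
    k = s2 * M ** (1.0 / a)
    a_samples.append(a); k_samples.append(k)

# Empirical distribution of the compatible parameters (A, K):
print(np.quantile(a_samples, [0.05, 0.5, 0.95]))
print(np.quantile(k_samples, [0.05, 0.5, 0.95]))
```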
https://en.wikipedia.org/wiki/Algorithmic_inference
Algorithmic learning theoryis a mathematical framework for analyzingmachine learningproblems and algorithms. Synonyms includeformal learning theoryandalgorithmic inductive inference[citation needed]. Algorithmic learning theory is different fromstatistical learning theoryin that it does not make use of statistical assumptions and analysis. Both algorithmic and statistical learning theory are concerned with machine learning and can thus be viewed as branches ofcomputational learning theory[citation needed]. Unlike statistical learning theory and most statistical theory in general, algorithmic learning theory does not assume that data are random samples, that is, that data points are independent of each other. This makes the theory suitable for domains where observations are (relatively) noise-free but not random, such as language learning[1]and automated scientific discovery.[2][3] The fundamental concept of algorithmic learning theory is learning in the limit: as the number of data points increases, a learning algorithm should converge to a correct hypothesis oneverypossible data sequence consistent with the problem space. This is a non-probabilistic version ofstatistical consistency, which also requires convergence to a correct model in the limit, but allows a learner to fail on data sequences with probability measure 0[citation needed]. Algorithmic learning theory investigates the learning power ofTuring machines. Other frameworks consider a much more restricted class of learning algorithms than Turing machines, for example, learners that compute hypotheses more quickly, for instance inpolynomial time. An example of such a framework isprobably approximately correct learning[citation needed]. The concept was introduced inE. Mark Gold's seminal paper "Language identification in the limit".[4]The objective oflanguage identificationis for a machine running one program to be capable of developing another program by which any given sentence can be tested to determine whether it is "grammatical" or "ungrammatical". The language being learned need not beEnglishor any othernatural language- in fact the definition of "grammatical" can be absolutely anything known to the tester. In Gold's learning model, the tester gives the learner an example sentence at each step, and the learner responds with ahypothesis, which is a suggestedprogramto determine grammatical correctness. It is required of the tester that every possible sentence (grammatical or not) appears in the list eventually, but no particular order is required. It is required of the learner that at each step the hypothesis must be correct for all the sentences so far.[citation needed] A particular learner is said to be able to "learn a language in the limit" if there is a certain number of steps beyond which its hypothesis no longer changes.[citation needed]At this point it has indeed learned the language, because every possible sentence appears somewhere in the sequence of inputs (past or future), and the hypothesis is correct for all inputs (past or future), so the hypothesis is correct for every sentence. The learner is not required to be able to tell when it has reached a correct hypothesis, all that is required is that it be true. 
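As a toy illustration of Gold's model, the following sketch learns by enumeration over an assumed hypothesis family L_n = {strings of 'a' of length ≤ n}; the family, the learner and the presentation are invented for the example, standing in for the general Turing-machine setting discussed next.

```python
def hypothesis(n):
    """Membership test for L_n: strings over {a} of length at most n."""
    return lambda s: set(s) <= {"a"} and len(s) <= n

def learner(examples):
    """After each labelled example, output the first hypothesis consistent so far."""
    guesses = []
    for t in range(1, len(examples) + 1):
        seen = examples[:t]
        n = 0
        while not all(hypothesis(n)(s) == label for s, label in seen):
            n += 1      # enumerate hypotheses until one fits all data seen so far
        guesses.append(n)
    return guesses

# A presentation of labelled sentences for the target language L_2:
examples = [("a", True), ("aaa", False), ("aa", True), ("aaaa", False)]
print(learner(examples))   # [1, 1, 2, 2]: the hypothesis converges and never changes again
```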
Gold showed that any language which is defined by a Turing machine program can be learned in the limit by another Turing-complete machine using enumeration.[clarification needed] This is done by the learner testing all possible Turing machine programs in turn until one is found which is correct so far – this forms the hypothesis for the current step. Eventually, the correct program will be reached, after which the hypothesis will never change again (but note that the learner does not know that it won't need to change).

Gold also showed that if the learner is given only positive examples (that is, only grammatical sentences appear in the input, not ungrammatical sentences), then the language can only be guaranteed to be learned in the limit if there are only a finite number of possible sentences in the language (this is possible if, for example, sentences are known to be of limited length).[clarification needed]

Language identification in the limit is a highly abstract model. It does not allow for limits of runtime or computer memory which can occur in practice, and the enumeration method may fail if there are errors in the input. However, the framework is very powerful, because if these strict conditions are maintained, it allows the learning of any program known to be computable.[citation needed] This is because a Turing machine program can be written to mimic any program in any conventional programming language. See Church–Turing thesis.

Learning theorists have investigated other learning criteria,[5] such as bounds on the number of mind changes a learner may make before converging. Mind change bounds are closely related to mistake bounds that are studied in statistical learning theory.[7] Kevin Kelly has suggested that minimizing mind changes is closely related to choosing maximally simple hypotheses in the sense of Occam's razor.[8]

Since 1990, there has been an International Conference on Algorithmic Learning Theory (ALT), called a Workshop in its first years (1990–1997).[9] Between 1992 and 2016, proceedings were published in the LNCS series.[10] Starting from 2017, they are published by the Proceedings of Machine Learning Research. The 34th conference will be held in Singapore in February 2023.[11] The topics of the conference cover all of theoretical machine learning, including statistical and computational learning theory, online learning, active learning, reinforcement learning, and deep learning.
https://en.wikipedia.org/wiki/Algorithmic_learning_theory
AlphaGo is a computer program that plays the board game Go.[1] It was developed by the London-based DeepMind Technologies,[2] an acquired subsidiary of Google. Subsequent versions of AlphaGo became increasingly powerful, including a version that competed under the name Master.[3] After retiring from competitive play, AlphaGo Master was succeeded by an even more powerful version known as AlphaGo Zero, which was completely self-taught, without learning from human games. AlphaGo Zero was then generalized into a program known as AlphaZero, which played additional games, including chess and shogi. AlphaZero has in turn been succeeded by a program known as MuZero, which learns without being taught the rules.

AlphaGo and its successors use a Monte Carlo tree search algorithm to find their moves, based on knowledge previously acquired by machine learning, specifically by an artificial neural network (a deep learning method) through extensive training, both from human and computer play.[4] A neural network is trained to identify the best moves and the winning percentages of these moves. This neural network improves the strength of the tree search, resulting in stronger move selection in the next iteration.

In October 2015, in a match against Fan Hui, the original AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board.[5][6] In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicap.[7] Although it lost to Lee Sedol in the fourth game, Lee resigned the final game, giving a final score of 4 games to 1 in favour of AlphaGo. In recognition of the victory, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association.[8] The lead-up and the challenge match with Lee Sedol were documented in a documentary film also titled AlphaGo,[9] directed by Greg Kohs.
The win by AlphaGo was chosen byScienceas one of theBreakthrough of the Yearrunners-up on 22 December 2016.[10] At the 2017Future of Go Summit, theMasterversion of AlphaGo beatKe Jie, the number one ranked player in the world at the time, in athree-game match, after which AlphaGo was awarded professional 9-dan by theChinese Weiqi Association.[11] After the match between AlphaGo and Ke Jie, DeepMind retired AlphaGo, while continuing AI research in other areas.[12]The self-taught AlphaGo Zero achieved a 100–0 victory against the early competitive version of AlphaGo, and its successorAlphaZerowas perceived as the world's top player in Go by the end of the 2010s.[13][14] Go is considered much more difficult for computers to win than other games such aschess, because its strategic and aesthetic nature makes it hard to directly construct an evaluation function, and its much largerbranching factormakes it prohibitively difficult to use traditional AI methods such asalpha–beta pruning,tree traversalandheuristicsearch.[5][15] Almost two decades afterIBM's computerDeep Bluebeat world chess championGarry Kasparovin the1997 match, the strongest Go programs usingartificial intelligencetechniques only reached aboutamateur 5-danlevel,[4]and still could not beat a professional Go player without ahandicap.[5][6][16]In 2012, the software programZen, running on a four PC cluster, beatMasaki Takemiya(9p) twice at five- and four-stone handicaps.[17]In 2013,Crazy StonebeatYoshio Ishida(9p) at a four-stone handicap.[18] According to DeepMind'sDavid Silver, the AlphaGo research project was formed around 2014 to test how well a neural network usingdeep learningcan compete at Go.[19]AlphaGo represents a significant improvement over previous Go programs. In 500 games against other available Go programs, including Crazy Stone and Zen, AlphaGo running on a single computer won all but one.[20]In a similar matchup, AlphaGo running on multiple computers won all 500 games played against other Go programs, and 77% of games played against AlphaGo running on a single computer. 
The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs.[4]

In October 2015, the distributed version of AlphaGo defeated the European Go champion Fan Hui,[21] a 2-dan (out of 9 dan possible) professional, five to zero.[6][22] This was the first time a computer Go program had beaten a professional human player on a full-sized board without handicap.[23] The announcement of the news was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature[4] describing the algorithms used.[6]

AlphaGo played South Korean professional Go player Lee Sedol, ranked 9-dan, one of the best players at Go,[16][needs update] with five games taking place at the Four Seasons Hotel in Seoul, South Korea on 9, 10, 12, 13, and 15 March 2016,[24][25] which were video-streamed live.[26] Out of five games, AlphaGo won four and Lee won the fourth, making him the only human player to have beaten AlphaGo in any of its 74 official games.[27] AlphaGo ran on Google's cloud computing platform, with its servers located in the United States.[28] The match used Chinese rules with a 7.5-point komi, and each side had two hours of thinking time plus three 60-second byoyomi periods.[29] The version of AlphaGo playing against Lee used a similar amount of computing power as was used in the Fan Hui match.[30] The Economist reported that it used 1,920 CPUs and 280 GPUs.[31] At the time of play, Lee Sedol had the second-highest number of Go international championship victories in the world, after South Korean player Lee Chang-ho, who had held the world championship title for 16 years.[32] Since there is no single official method of ranking in international Go, the rankings may vary among sources. While he was sometimes ranked top, some sources ranked Lee Sedol as the fourth-best player in the world at the time.[33][34] AlphaGo was not specifically trained to face Lee, nor was it designed to compete with any specific human player.

The first three games were won by AlphaGo following resignations by Lee.[35][36] However, Lee beat AlphaGo in the fourth game, winning by resignation at move 180. AlphaGo then went on to achieve a fourth win, winning the fifth game by resignation.[37]

The prize was US$1 million. Since AlphaGo won four out of five games and thus the series, the prize money was to be donated to charities, including UNICEF.[38] Lee Sedol received $150,000 for participating in all five games and an additional $20,000 for his win in Game 4.[29]

In June 2016, at a presentation held at a university in the Netherlands, Aja Huang, one of the DeepMind team, revealed that the team had patched the logical weakness that occurred during the fourth game of the match between AlphaGo and Lee, and that after move 78 (which was dubbed the "divine move" by many professionals), it would play as intended and maintain Black's advantage. Before move 78, AlphaGo was leading throughout the game, but Lee's move caused the program's computing powers to be diverted and confused.[39] Huang explained that AlphaGo's policy network for finding the most accurate move order and continuation did not precisely guide AlphaGo to the correct continuation after move 78, since its value network did not determine Lee's 78th move as being the most likely, and therefore, when the move was made, AlphaGo could not make the right adjustment to the logical continuation.[40]

On 29 December 2016, a new account on the Tygem server named "Magister" (shown as 'Magist' at the server's Chinese version) from South Korea began to play games with professional players.
It changed its account name to "Master" on 30 December, then moved to the FoxGo server on 1 January 2017. On 4 January, DeepMind confirmed that the "Magister" and the "Master" were both played by an updated version of AlphaGo, calledAlphaGo Master.[41][42]As of 5 January 2017, AlphaGo Master's online record was 60 wins and 0 losses,[43]including three victories over Go's top-ranked player,Ke Jie,[44]who had been quietly briefed in advance that Master was a version of AlphaGo.[43]After losing to Master,Gu Lioffered a bounty of 100,000yuan(US$14,400) to the first human player who could defeat Master.[42]Master played at the pace of 10 games per day. Many quickly suspected it to be an AI player due to little or no resting between games. Its adversaries included many world champions such asKe Jie,Park Jeong-hwan,Yuta Iyama,Tuo Jiaxi,Mi Yuting,Shi Yue,Chen Yaoye, Li Qincheng,Gu Li,Chang Hao, Tang Weixing,Fan Tingyu,Zhou Ruiyang,Jiang Weijie,Chou Chun-hsun,Kim Ji-seok,Kang Dong-yun,Park Yeong-hun, andWon Seong-jin; national champions or world championship runners-up such asLian Xiao,Tan Xiao, Meng Tailing, Dang Yifei, Huang Yunsong,Yang Dingxin, Gu Zihao, Shin Jinseo,Cho Han-seung, and An Sungjoon. All 60 games except one were fast-paced games with three 20 or 30 secondsbyo-yomi. Master offered to extend the byo-yomi to one minute when playing withNie Weipingin consideration of his age. After winning its 59th game Master revealed itself in the chatroom to be controlled by Dr.Aja Huangof the DeepMind team,[45]then changed its nationality to the United Kingdom. After these games were completed, the co-founder ofDeepMind,Demis Hassabis, said in a tweet, "we're looking forward to playing some official, full-length games later [2017] in collaboration with Go organizations and experts".[41][42] Go experts were impressed by the program's performance and its nonhuman play style; Ke Jie stated that "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go."[43] In the Future of Go Summit held inWuzhenin May 2017,AlphaGo Masterplayed three games with Ke Jie, the world No.1 ranked player, as well as two games with several top Chinese professionals, one pair Go game and one against a collaborating team of five human players.[46] Google DeepMind offered 1.5 million dollar winner prizes for the three-game match between Ke Jie and Master while the losing side took 300,000 dollars.[47][48]Master won all three games against Ke Jie,[49][50]after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association.[11] After winning its three-game match against Ke Jie, the top-rated world Go player, AlphaGo retired. 
DeepMind also disbanded the team that worked on the game to focus on AI research in other areas.[12] After the Summit, DeepMind published 50 full-length AlphaGo vs AlphaGo matches as a gift to the Go community.[51]

AlphaGo's team published an article in the journal Nature on 19 October 2017, introducing AlphaGo Zero, a version without human data and stronger than any previous human-champion-defeating version.[52] By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days.[53]

In a paper released on arXiv on 5 December 2017, DeepMind claimed that it generalized AlphaGo Zero's approach into a single AlphaZero algorithm, which achieved within 24 hours a superhuman level of play in the games of chess, shogi, and Go by defeating world-champion programs, Stockfish, Elmo, and a 3-day version of AlphaGo Zero, in each case.[54]

On 11 December 2017, DeepMind released an AlphaGo teaching tool on its website[55] to analyze winning rates of different Go openings as calculated by AlphaGo Master.[56] The teaching tool collects 6,000 Go openings from 230,000 human games, each analyzed with 10,000,000 simulations by AlphaGo Master. Many of the openings include human move suggestions.[56]

An early version of AlphaGo was tested on hardware with various numbers of CPUs and GPUs, running in asynchronous or distributed mode. Two seconds of thinking time was given to each move. Elo ratings were measured for the resulting configurations;[4] higher ratings were achieved in the matches with more time per move. In May 2016, Google unveiled its own proprietary hardware "tensor processing units", which it stated had already been deployed in multiple internal projects at Google, including the AlphaGo match against Lee Sedol.[57][58]

In the Future of Go Summit in May 2017, DeepMind disclosed that the version of AlphaGo used in this Summit was AlphaGo Master,[59][60] and revealed that it had measured the strength of different versions of the software. AlphaGo Lee, the version used against Lee, could give AlphaGo Fan, the version used in AlphaGo vs. Fan Hui, three stones, and AlphaGo Master was even three stones stronger.[61]

As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network", both implemented using deep neural network technology.[5][4] A limited amount of game-specific feature detection pre-processing (for example, to highlight whether a move matches a nakade pattern) is applied to the input before it is sent to the neural networks.[4] The networks are convolutional neural networks with 12 layers, trained by reinforcement learning.[4]

The system's neural networks were initially bootstrapped from human gameplay expertise.
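The following is a compact sketch of the kind of network-guided Monte Carlo tree search just described, using a generic PUCT-style selection rule. It is not DeepMind's implementation: policy, value, step and is_terminal are hypothetical stand-ins for the trained networks and the game logic.

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior          # P(s, a) from the policy network
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}          # action -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """PUCT: exploit high value, explore high-prior, rarely visited moves."""
    total = math.sqrt(sum(c.visits for c in node.children.values()) + 1)
    return max(node.children.items(),
               key=lambda kv: kv[1].q()
               + c_puct * kv[1].prior * total / (1 + kv[1].visits))

def simulate(node, state, policy, value, step, is_terminal):
    """One tree-search playout; returns the value for the player to move."""
    if is_terminal(state):
        return 0.0
    if not node.children:                       # leaf: expand with network priors
        for action, p in policy(state).items():
            node.children[action] = Node(p)
        return value(state)                     # the value network replaces rollouts
    action, child = select_child(node)
    # Negate: the child's value is from the opponent's perspective.
    v = -simulate(child, step(state, action), policy, value, step, is_terminal)
    child.visits += 1
    child.value_sum += v
    return v

# After many simulate() calls from a root node, play the most-visited action:
#   best = max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```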
AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves.[21] Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.[5] To avoid "disrespectfully" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the match against Lee, the resignation threshold was set to 20%.[64]

Toby Manning, the match referee for AlphaGo vs. Fan Hui, has described the program's style as "conservative".[65] AlphaGo's playing style strongly favours a greater probability of winning by fewer points over a lesser probability of winning by more points.[19] Its strategy of maximising its probability of winning is distinct from what human players tend to do, which is to maximise territorial gains, and this explains some of its odd-looking moves.[66] It makes many opening moves that have never or seldom been made by humans. It likes to use shoulder hits, especially if the opponent is over-concentrated.[67]

AlphaGo's March 2016 victory was a major milestone in artificial intelligence research.[68] Go had previously been regarded as a hard problem in machine learning that was expected to be out of reach for the technology of the time.[68][69][70] Most experts thought a Go program as powerful as AlphaGo was at least five years away;[71] some experts thought that it would take at least another decade before computers would beat Go champions.[4][72][73] Most observers at the beginning of the 2016 matches expected Lee to beat AlphaGo.[68]

With games such as checkers (which has been solved by the Chinook computer engine), chess, and now Go won by computers, victories at popular board games can no longer serve as major milestones for artificial intelligence in the way that they used to. Deep Blue's Murray Campbell called AlphaGo's victory "the end of an era... board games are more or less done and it's time to move on."[68]

When compared with Deep Blue or Watson, AlphaGo's underlying algorithms are potentially more general-purpose and may be evidence that the scientific community is making progress towards artificial general intelligence.[19][74] Some commentators believe AlphaGo's victory makes for a good opportunity for society to start preparing for the possible future impact of machines with general-purpose intelligence. As noted by entrepreneur Guy Suter, AlphaGo only knows how to play Go and does not possess general-purpose intelligence; "[It] couldn't just wake up one morning and decide it wants to learn how to use firearms."[68] AI researcher Stuart Russell said that AI systems such as AlphaGo have progressed quicker and become more powerful than expected, and we must therefore develop methods to ensure they "remain under human control".[75] Some scholars, such as Stephen Hawking, warned (in May 2015, before the matches) that some future self-improving AI could gain actual general intelligence, leading to an unexpected AI takeover; other scholars disagree: AI expert Jean-Gabriel Ganascia believes that "Things like 'common sense'... may never be reproducible",[76] and says "I don't see why we would speak about fears. On the contrary, this raises hopes in many domains such as health and space exploration."[75] Computer scientist Richard Sutton said "I don't think people should be scared...
but I do think people should be paying attention."[77]

In China, AlphaGo was a "Sputnik moment" which helped convince the Chinese government to prioritize and dramatically increase funding for artificial intelligence.[78]

In 2017, the DeepMind AlphaGo team received the inaugural IJCAI Marvin Minsky Medal for Outstanding Achievements in AI. "AlphaGo is a wonderful achievement, and a perfect example of what the Minsky Medal was initiated to recognise", said Professor Michael Wooldridge, Chair of the IJCAI Awards Committee. "What particularly impressed IJCAI was that AlphaGo achieves what it does through a brilliant combination of classic AI techniques as well as the state-of-the-art machine learning techniques that DeepMind is so closely associated with. It's a breathtaking demonstration of contemporary AI, and we are delighted to be able to recognise it with this award."[79]

Go is a popular game in China, Japan and Korea, and the 2016 matches were watched by perhaps a hundred million people worldwide.[68][80] Many top Go players characterized AlphaGo's unorthodox plays as seemingly questionable moves that initially befuddled onlookers, but made sense in hindsight:[72] "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself."[68] AlphaGo appeared to have unexpectedly become much stronger, even when compared with its October 2015 match,[81] where a computer had beaten a Go professional for the first time ever without the advantage of a handicap.[82] The day after Lee's first defeat, Jeong Ahram, the lead Go correspondent for one of South Korea's biggest daily newspapers, said "Last night was very gloomy... Many people drank alcohol."[83] The Korea Baduk Association, the organization that oversees Go professionals in South Korea, awarded AlphaGo an honorary 9-dan title for exhibiting creative skills and pushing forward the game's progress.[84]

China's Ke Jie, an 18-year-old generally recognized as the world's best Go player at the time,[33][85] initially claimed that he would be able to beat AlphaGo, but declined to play against it for fear that it would "copy my style".[85] As the matches progressed, Ke Jie went back and forth, stating that "it is highly likely that I (could) lose" after analysing the first three matches,[86] but regaining confidence after AlphaGo displayed flaws in the fourth match.[87]

Toby Manning, the referee of AlphaGo's match against Fan Hui, and Hajin Lee, secretary general of the International Go Federation, both reason that in the future, Go players will get help from computers to learn what they have done wrong in games and improve their skills.[82]

After game two, Lee said he felt "speechless": "From the very beginning of the match, I could never manage an upper hand for one single move.
It was AlphaGo's total victory."[88]Lee apologized for his losses, stating after game three that "I misjudged the capabilities of AlphaGo and felt powerless."[68]He emphasized that the defeat was "Lee Se-dol's defeat" and "not a defeat of mankind".[27][76]Lee said his eventual loss to a machine was "inevitable" but stated that "robots will never understand the beauty of the game the same way that we humans do."[76]Lee called his game four victory a "priceless win that I (would) not exchange for anything."[27] OnRotten Tomatoesthe documentary has an approval rating of 100% from 10 reviews.[89] Michael Rechtshaffen of theLos Angeles Timesgave the documentary a positive review and said: "It helps matters when you have a group of engaging human subjects like soft-spoken Sedol, who's as intensively contemplative as the game itself, contrasted by the spirited, personable Fan Hui, the Paris-based European champ who accepts an offer to serve as an advisor for the DeepMind team after suffering a demoralizing AI trouncing". He also mentioned that, with the contribution of Volker Bertelmann (Hauschka), the film's composer, the documentary shows many unexpected sequences, including strategic and philosophical components.[90] John Defore ofThe Hollywood Reporterwrote that this documentary is "an involving sports-rivalry doc with an AI twist": "In the end, observers wonder if AlphaGo's odd variety of intuition might not kill Go as an intellectual pursuit but shift its course, forcing the game's scholars to consider it from new angles. So maybe it isn't time to welcome our computer overlords, and won't be for a while - maybe they'll teach us to be better thinkers before turning us into their slaves."[91] Greg Kohs, the director of the film, said "The complexity of the game of Go, combined with the technical depth of an emerging technology like artificial intelligence seemed like it might create an insurmountable barrier for a film like this. The fact that I was so innocently unaware of Go and AlphaGo actually proved to be beneficial. It allowed me to approach the action and interviews with pure curiosity, the kind that helps make any subject matter emotionally accessible." Kohs also said that "Unlike the film's human characters – who turn their curious quest for knowledge into an epic spectacle with great existential implications, who dare to risk their reputation and pride to contest that curiosity – AI might not yet possess the ability to empathize. But it can teach us profound things about our humanness – the way we play board games, the way we think and feel and grow. It's a deep, vast premise, but my hope is, by sharing it, we can discover something within ourselves we never saw before".[92] Hajin Lee, a former professional Go player, described this documentary as being "beautifully filmed". In addition to the story itself, the feelings and atmosphere were conveyed through the arrangement of scenes: for example, the close-up shots of Lee Sedol as he realizes how strong AlphaGo is, the Korean commentator's distress following the first defeat, and the tension held inside the playing room. The documentary also tells its story by describing the background of the AlphaGo technology and the customs of the Korean Go community. She also suggested some areas that could have been covered in more depth. 
For instance, she would have included information about the state of Go AIs before AlphaGo, the confidence and pride of professional Go players, and the shift in players' perception of the Go AI during and after the match: "If anything could be added, I would include information about the primitive level of top Go A.I.s before AlphaGo, and more about professional Go players' lives and pride, to provide more context for Lee Sedol's pre-match confidence, and Go players' changing perception of AlphaGo as the match advanced".[93] Fan Hui, a professional Go player who had played against AlphaGo, said that "DeepMind had trained AlphaGo by showing it many strong amateur games of Go to develop its understanding of how a human plays before challenging it to play versions of itself thousands of times, a novel form of reinforcement learning which had given it the ability to rival an expert human. History had been made, and centuries of received learning overturned in the process. The program was free to learn the game for itself."[94] James Vincent, a reporter for The Verge, commented that "It prods and pokes viewers with unsubtle emotional cues, like a reality TV show would. 'Now, you should be nervous; now you should feel relieved'." The AlphaGo footage slowly captures the moment when Lee Sedol acknowledges the true power of AlphaGo. In the first game, Lee was confident that his experience would let him beat the AI easily, but the early game dynamics were not what he expected. After losing the first match, he became more nervous and lost confidence. Afterward, he reacted to AlphaGo's attacks in unusual ways, unintentionally displaying his anger as he tried simply to win the match. At one point he spent twelve minutes on a single move, while AlphaGo took only a minute and a half to respond. AlphaGo weighed each alternative consistently and showed no reaction to Lee's fighting moves; the game simply continued as if he were not there. Vincent also said that "suffice to say that humanity does land at least one blow on the machines, through Lee's so-called 'divine move'", and that "More likely, the forces of automation we'll face will be impersonal and incomprehensible. They'll come in the form of star ratings we can't object to, and algorithms we can't fully understand. Dealing with the problems of AI will take a perspective that looks beyond individual battles. AlphaGo is worth seeing because it raises these questions"[95] "Go is an extraordinary game but it represents what we can do with AI in all kinds of other spheres," says Murray Shanahan, professor of cognitive robotics at Imperial College London and senior research scientist at DeepMind. 
"In just the same way there are all kinds of realms of possibility within Go that have not been discovered, we could never have imagined the potential for discovering drugs and other materials."[94] Facebookhas also been working on its own Go-playing systemdarkforest, also based on combining machine learning andMonte Carlo tree search.[65][96]Although a strong player against other computer Go programs, as of early 2016, it had not yet defeated a professional human player.[97]Darkforest has lost to CrazyStone and Zen and is estimated to be of similar strength to CrazyStone and Zen.[98] DeepZenGo, a system developed with support from video-sharing websiteDwangoand theUniversity of Tokyo, lost 2–1 in November 2016 to Go masterCho Chikun, who holds the record for the largest number of Go title wins in Japan.[99][100] A 2018 paper inNaturecited AlphaGo's approach as the basis for a new means of computing potential pharmaceutical drug molecules.[101][102]Systems consisting ofMonte Carlo tree searchguided by neural networks have since been explored for a wide array of applications.[103] AlphaGo Master(white) v.Tang Weixing(31 December 2016), AlphaGo won by resignation. White 36 was widely praised. The documentary filmAlphaGo[9][89]raised hopes thatLee SedolandFan Huiwould have benefitted from their experience of playing AlphaGo, but as of May 2018[update], their ratings were little changed;Lee Sedolwas ranked 11th in the world, andFan Hui545th.[104]On 19 November 2019, Lee announced his retirement from professional play, arguing that he could never be the top overall player of Go due to the increasing dominance of AI. Lee referred to them as being "an entity that cannot be defeated".[105]
https://en.wikipedia.org/wiki/AlphaGo
AlphaGo Zerois a version ofDeepMind'sGo softwareAlphaGo. AlphaGo's team published an article inNaturein October 2017 introducing AlphaGo Zero, a version created without using data from human games, and stronger than any previous version.[1]By playing games against itself, AlphaGo Zero: surpassed the strength ofAlphaGo Leein three days by winning 100 games to 0; reached the level ofAlphaGo Masterin 21 days; and exceeded all previous versions in 40 days.[2] Trainingartificial intelligence(AI) withoutdatasetsderived fromhuman expertshas significant implications for the development of AI with superhuman skills, as expert data is "often expensive, unreliable, or simply unavailable."[3]Demis Hassabis, the co-founder and CEO of DeepMind, said that AlphaGo Zero was so powerful because it was "no longer constrained by the limits of human knowledge".[4]Furthermore, AlphaGo Zero performed better than standarddeep reinforcement learningmodels (such asDeep Q-Networkimplementations[5]) due to its integration ofMonte Carlo tree search.David Silver, one of the first authors of DeepMind's papers published inNatureon AlphaGo, said that it is possible to have generalized AI algorithms by removing the need to learn from humans.[6] Google later developedAlphaZero, a generalized version of AlphaGo Zero that could playchessandShōgiin addition to Go.[7]In December 2017, AlphaZero beat the 3-day version of AlphaGo Zero by winning 60 games to 40, and with 8 hours of training it outperformed AlphaGo Lee on anElo scale. AlphaZero also defeated a top chess program (Stockfish) and a top Shōgi program (Elmo).[8][9] The network in AlphaGo Zero is aResNetwith two heads.[1]: Appendix: Methods AlphaGo Zero's neural network was trained usingTensorFlow, with 64 GPU workers and 19 CPU parameter servers. Only fourTPUswere used for inference. Theneural networkinitially knew nothing aboutGobeyond therules. Unlike earlier versions of AlphaGo, Zero only perceived the board's stones, rather than relying on rare, human-programmed edge cases to help it recognize unusual Go board positions. 
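A minimal sketch of such a two-headed ("dual-head") residual network, written here in PyTorch for brevity (DeepMind's training used TensorFlow, as noted above). The channel count, number of residual blocks, and head layer sizes are illustrative, not the published configuration; the 17 input planes follow the board encoding described in the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.relu(x + y)   # skip connection

class DualHeadNet(nn.Module):
    """Shared residual tower with a policy head and a value head."""
    def __init__(self, channels=64, blocks=4, board=19):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(17, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU())
        self.tower = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        # Policy head: a logit for each board point plus the pass move.
        self.policy = nn.Sequential(
            nn.Conv2d(channels, 2, 1), nn.Flatten(),
            nn.Linear(2 * board * board, board * board + 1))
        # Value head: a scalar in [-1, 1] predicting the game outcome.
        self.value = nn.Sequential(
            nn.Conv2d(channels, 1, 1), nn.Flatten(),
            nn.Linear(board * board, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh())

    def forward(self, x):
        h = self.tower(self.stem(x))
        return self.policy(h), self.value(h)

net = DualHeadNet()
p, v = net(torch.zeros(1, 17, 19, 19))
print(p.shape, v.shape)  # torch.Size([1, 362]) torch.Size([1, 1])
```

Sharing one tower between both heads is the key design choice: the same learned board representation serves both move selection and position evaluation.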
The AI engaged inreinforcement learning, playing against itself until it could anticipate its own moves and how those moves would affect the game's outcome.[10]In the first three days AlphaGo Zero played 4.9 million games against itself in quick succession.[11]It appeared to develop the skills required to beat top humans within just a few days, whereas the earlier AlphaGo took months of training to achieve the same level.[12] Training cost 3×10²³ FLOPs, ten times that of AlphaZero.[13] For comparison, the researchers also trained a version of AlphaGo Zero using human games, AlphaGo Master, and found that it learned more quickly, but actually performed more poorly in the long run.[14]DeepMind submitted its initial findings in a paper toNaturein April 2017, which was then published in October 2017.[1] The hardware cost for a single AlphaGo Zero system in 2017, including the four TPUs, has been quoted as around $25 million.[15] According to Hassabis, AlphaGo's algorithms are likely to be of the most benefit to domains that require an intelligent search through an enormous space of possibilities, such asprotein folding(seeAlphaFold) or accurately simulating chemical reactions.[16]AlphaGo's techniques are probably less useful in domains that are difficult to simulate, such as learning how to drive a car.[17]DeepMind stated in October 2017 that it had already started active work on attempting to use AlphaGo Zero technology for protein folding, and stated it would soon publish new findings.[18][19] AlphaGo Zero was widely regarded as a significant advance, even when compared with its groundbreaking predecessor, AlphaGo.Oren Etzioniof theAllen Institute for Artificial Intelligencecalled AlphaGo Zero "a very impressive technical result" in "both their ability to do it—and their ability to train the system in 40 days, on four TPUs".[10]The Guardiancalled it a "major breakthrough for artificial intelligence", citing Eleni Vasilaki ofSheffield Universityand Tom Mitchell ofCarnegie Mellon University, who called it an impressive feat and an “outstanding engineering accomplishment” respectively.[17]Mark Pesceof the University of Sydney called AlphaGo Zero "a big technological advance" taking us into "undiscovered territory".[20] Gary Marcus, a psychologist atNew York University, has cautioned that for all we know, AlphaGo may contain "implicit knowledge that the programmers have about how to construct machines to play problems like Go" and will need to be tested in other domains before being sure that its base architecture is effective at much more than playing Go. In contrast, DeepMind is "confident that this approach is generalisable to a large number of domains".[11] In response to the reports, South Korean Go professionalLee Sedolsaid, "The previous version of AlphaGo wasn’t perfect, and I believe that’s why AlphaGo Zero was made." On the potential for AlphaGo's development, Lee said he will have to wait and see but also said it will affect young Go players.Mok Jin-seok, who directs the South Korean national Go team, said the Go world has already been imitating the playing styles of previous versions of AlphaGo and creating new ideas from them, and he is hopeful that new ideas will come out from AlphaGo Zero. Mok also added that general trends in the Go world are now being influenced by AlphaGo's playing style. "At first, it was hard to understand and I almost felt like I was playing against an alien. However, having had a great amount of experience, I’ve become used to it," Mok said. 
"We are now past the point where we debate the gap between the capability of AlphaGo and humans. It’s now between computers." Mok has reportedly already begun analyzing the playing style of AlphaGo Zero along with players from the national team. "Though having watched only a few matches, we received the impression that AlphaGo Zero plays more like a human than its predecessors," Mok said.[21]Chinese Go professionalKe Jiecommented on the remarkable accomplishments of the new program: "A pure self-learning AlphaGo is the strongest. Humans seem redundant in front of its self-improvement."[22] Future of Go Summit 89:11 against AlphaGo Master On 5 December 2017, DeepMind team released a preprint onarXiv, introducing AlphaZero, a program using generalized AlphaGo Zero's approach, which achieved within 24 hours a superhuman level of play inchess,shogi, andGo, defeating world-champion programs,Stockfish,Elmo, and 3-day version of AlphaGo Zero in each case.[8] AlphaZero (AZ) is a more generalized variant of the AlphaGo Zero (AGZ)algorithm, and is able to play shogi and chess as well as Go. Differences between AZ and AGZ include:[8] Anopen sourceprogram,Leela Zero, based on the ideas from the AlphaGo papers is available. It uses aGPUinstead of theTPUsrecent versions of AlphaGo rely on.
https://en.wikipedia.org/wiki/AlphaGo_Zero
Inartificial intelligence,apprenticeship learning(orlearning from demonstrationorimitation learning) is the process of learning by observing an expert.[1][2]It can be viewed as a form ofsupervised learning, where the training dataset consists of task executions by a demonstration teacher.[2] Mapping methods try to mimic the expert by forming a direct mapping either from states to actions,[2]or from states to reward values.[1]For example, in 2002 researchers used such an approach to teach an AIBO robot basic soccer skills.[2] Inverse reinforcement learning(IRL) is the process of deriving a reward function from observed behavior. While ordinary "reinforcement learning" involves using rewards and punishments to learn behavior, in IRL the direction is reversed, and a robot observes a person's behavior to figure out what goal that behavior seems to be trying to achieve.[3]The IRL problem can be defined as:[4] Given 1) measurements of an agent's behaviour over time, in a variety of circumstances; 2) measurements of the sensory inputs to that agent; 3) a model of the physical environment (including the agent's body): Determine the reward function that the agent is optimizing. IRL researcherStuart J. Russellproposes that IRL might be used to observe humans and attempt to codify their complex "ethical values", in an effort to create "ethical robots" that might someday know "not to cook your cat" without needing to be explicitly told.[5]The scenario can be modeled as a "cooperative inverse reinforcement learning game", where a "person" player and a "robot" player cooperate to secure the person's implicit goals, despite these goals not being explicitly known by either the person or the robot.[6][7] In 2017,OpenAIandDeepMindapplieddeep learningto cooperative inverse reinforcement learning in simple domains such as Atari games and straightforward robot tasks such as backflips. The human role was limited to answering queries from the robot as to which of two different actions was preferred. The researchers found evidence that the techniques may be economically scalable to modern systems.[8][9] Apprenticeship via inverse reinforcement learning(AIRP) was developed in 2004 byPieter Abbeel, Professor inBerkeley'sEECSdepartment, andAndrew Ng, Associate Professor inStanford University's Computer Science Department. AIRP deals with a "Markov decision processwhere we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform".[1]AIRP has been used to model the reward functions of highly dynamic scenarios where no reward function is intuitively obvious. Take the task of driving, for example: many different objectives operate simultaneously, such as maintaining a safe following distance, keeping a good speed, and not changing lanes too often. This task may seem easy at first glance, but a trivial reward function may not converge to the desired policy. One domain where AIRP has been used extensively is helicopter control. While simple trajectories can be derived intuitively, AIRP has also succeeded at complicated tasks such asaerobaticsfor shows. These includeaerobatic maneuverssuch as in-place flips, in-place rolls, loops, hurricanes and even auto-rotation landings. 
This work was developed by Pieter Abbeel, Adam Coates, and Andrew Ng in "Autonomous Helicopter Aerobatics through Apprenticeship Learning".[10] System models try to mimic the expert by modeling world dynamics.[2] The system learns rules to associate preconditions and postconditions with each action. In one 1994 demonstration, a humanoid learns a generalized plan from only two demonstrations of a repetitive ball collection task.[2] Learning from demonstration is often explained from the perspective that a workingRobot-control-systemis available and the human demonstrator is using it. And indeed, if the software works, theHuman operatortakes the robot arm, makes a move with it, and the robot reproduces the action later. For example, the operator teaches the robot arm how to put a cup under a coffeemaker and press the start button; in the replay phase, the robot imitates this behavior exactly. But that is not how the system works internally; it is only what the audience can observe. In reality, learning from demonstration is much more complex. One of the first works on learning by robot apprentices (anthropomorphic robots learning by imitation) was Adrian Stoica's PhD thesis in 1995.[11] In 1997, robotics expertStefan Schaalwas working on theSarcosrobot arm. The goal was simple: solve thependulum swingup task. The robot itself can execute a movement, and as a result the pendulum moves. The problem is that it is unclear which actions will result in which movement. It is anOptimal controlproblem that can be described with mathematical formulas but is hard to solve. Schaal's idea was not to use aBrute-force solverbut to record the movements of a human demonstration: the angle of the pendulum is logged over three seconds on the y-axis, resulting in a diagram that shows a pattern.[12] In computer animation, the principle is calledspline animation.[13]That means the time is given on the x-axis, for example 0.5 seconds, 1.0 seconds, 1.5 seconds, while the variable is given on the y-axis; in most cases this is the position of an object, and for the inverted pendulum it is the angle. The overall task consists of two parts: recording the angle over time and reproducing the recorded motion. The reproducing step is surprisingly simple. As input, we know which angle the pendulum must have at each time step. Bringing the system to a given state is called “tracking control” orPID control: we have a trajectory over time and must find control actions to map the system onto this trajectory. Other authors call the principle “steering behavior”,[14]because the aim is to bring a robot to a given line.
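A minimal sketch of the tracking-control idea just described: a recorded reference trajectory (here a synthetic angle curve standing in for the logged pendulum angle) is reproduced by a PID controller on a deliberately simple simulated plant. The plant model and the gain values are illustrative, not taken from Schaal's work.

```python
import math

# "Recorded" demonstration: angle over three seconds, sampled at 100 Hz.
dt = 0.01
reference = [math.sin(2 * math.pi * t * dt) for t in range(300)]

# Toy plant: the control action directly sets the angular velocity.
angle = 0.0
kp, ki, kd = 8.0, 0.5, 0.2           # illustrative PID gains
integral, prev_error = 0.0, 0.0

for target in reference:
    error = target - angle            # deviation from the recorded trajectory
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative  # control action
    angle += u * dt                   # apply the action to the plant
    prev_error = error

print(f"final tracking error: {abs(reference[-1] - angle):.4f}")
```

The point of the sketch is the division of labour the text describes: the demonstration supplies the desired trajectory, and the controller's only job is to keep the system on it.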
https://en.wikipedia.org/wiki/Apprenticeship_learning
TheMarkov condition, sometimes called theMarkov assumption, is an assumption made inBayesian probability theory, that every node in aBayesian networkisconditionally independentof its nondescendants, given its parents. Stated loosely, it is assumed that a node has no bearing on nodes which do not descend from it. In aDAG, this local Markov condition is equivalent to the global Markov condition, which states thatd-separationsin the graph also correspond to conditional independence relations.[1][2]This also means that a node is conditionally independent of the entire network, given itsMarkov blanket. The relatedCausal Markov (CM) conditionstates that, conditional on the set of all its direct causes, a node is independent of all variables which are not effects or direct causes of that node.[3]In the event that the structure of a Bayesian network accurately depictscausality, the two conditions are equivalent. However, a network may accurately embody the Markov condition without depicting causality, in which case it should not be assumed to embody the causal Markov condition. Statisticians are enormously interested in the ways in which certain events and variables are connected. The precise notion of what constitutes a cause and effect is necessary to understand the connections between them. The central idea behind the philosophical study of probabilistic causation is that causes raise the probabilities of their effects,all else being equal. Adeterministicinterpretation of causation means that ifAcausesB, thenAmustalwaysbe followed byB. In this sense, smoking does not cause cancer because some smokers never develop cancer. On the other hand, aprobabilisticinterpretation simply means that causes raise the probability of their effects. In this sense, changes in meteorological readings associated with a storm do cause that storm, since they raise its probability. (However, simply looking at a barometer does not change the probability of the storm; for a more detailed analysis, see [4].) It follows from the definition that ifXandYare inVand are probabilistically dependent, then eitherXcausesY,YcausesX, orXandYare both effects of some common causeZinV.[3]This definition was seminally introduced by Hans Reichenbach as the Common Cause Principle (CCP).[5] It once again follows from the definition that the parents ofXscreenXfrom other "indirect causes" ofX(parents of Parents(X)) and other effects of Parents(X) which are not also effects ofX.[3] In a simple view, releasing one's hand from a hammer causes the hammer to fall. However, doing so in outer space does not produce the same outcome, calling into question whether releasing one's fingers from a hammeralwayscauses it to fall. A causal graph could be created to acknowledge that both the presence of gravity and the release of the hammer contribute to its falling. However, it would be very surprising if the surface underneath the hammer affected its falling. This essentially states the Causal Markov Condition: given the existence of gravity and the release of the hammer, it will fall regardless of what is beneath it.
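For a DAG over variables X1, …, Xn, the local Markov condition is equivalent to the standard factorisation of the joint distribution over the graph, writing pa(Xi) for the parents of Xi:

```latex
P(X_1,\ldots,X_n) \;=\; \prod_{i=1}^{n} P\bigl(X_i \mid \operatorname{pa}(X_i)\bigr)
```

Applied to the hammer example, with gravity and the release as parentless causes, this reads P(gravity, release, falls) = P(gravity) P(release) P(falls | gravity, release); conditional on gravity and the release, "falls" is independent of any non-descendant, such as the surface beneath the hammer.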
https://en.wikipedia.org/wiki/Causal_Markov_condition
Competitive learningis a form ofunsupervised learninginartificial neural networks, in which nodes compete for the right to respond to a subset of the input data.[1][2]A variant ofHebbian learning, competitive learning works by increasing the specialization of each node in the network. It is well suited to findingclusterswithin data. Models and algorithms based on the principle of competitive learning includevector quantizationandself-organizing maps(Kohonen maps). There are three basic elements to a competitive learning rule.[3][4] Accordingly, the individual neurons of the network learn to specialize on ensembles of similar patterns and in so doing become 'feature detectors' for different classes of input patterns. The fact that competitive networks recode sets of correlated inputs to one of a few output neurons essentially removes the redundancy in representation which is an essential part of processing in biologicalsensory systems.[5][6] Competitive learning is usually implemented with neural networks that contain a hidden layer which is commonly known as the “competitive layer”.[7]Every competitive neuron is described by a vector of weightswi=(wi1,..,wid)T,i=1,..,M{\displaystyle {\mathbf {w} }_{i}=\left({w_{i1},..,w_{id}}\right)^{T},i=1,..,M}and calculates thesimilarity measurebetween the input dataxn=(xn1,..,xnd)T∈Rd{\displaystyle {\mathbf {x} }^{n}=\left({x_{n1},..,x_{nd}}\right)^{T}\in \mathbb {R} ^{d}}and the weight vectorwi{\displaystyle {\mathbf {w} }_{i}}. For every input vector, the competitive neurons “compete” with each other to see which one of them is the most similar to that particular input vector. The winner neuron m sets its outputom=1{\displaystyle o_{m}=1}and all the other competitive neurons set their outputoi=0,i=1,..,M,i≠m{\displaystyle o_{i}=0,i=1,..,M,i\neq m}. Usually, similarity is measured using the inverse of the Euclidean distance‖x−wi‖{\displaystyle \left\|{{\mathbf {x} }-{\mathbf {w} }_{i}}\right\|}between the input vectorxn{\displaystyle {\mathbf {x} }^{n}}and the weight vectorwi{\displaystyle {\mathbf {w} }_{i}}. Here is a simple competitive learning algorithm to find three clusters within some input data; a minimal code sketch of the same procedure follows the steps. 1. (Set-up.) Let a set of sensors all feed into three different nodes, so that every node is connected to every sensor. Let the weights that each node gives to its sensors be set randomly between 0.0 and 1.0. Let the output of each node be the sum of all its sensors, each sensor's signal strength being multiplied by its weight. 2. When the net is shown an input, the node with the highest output is deemed the winner. The input is classified as being within the cluster corresponding to that node. 3. The winner updates each of its weights, moving weight from the connections that gave it weaker signals to the connections that gave it stronger signals. Thus, as more data are received, each node converges on the centre of the cluster that it has come to represent and activates more strongly for inputs in this cluster and more weakly for inputs in other clusters.
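The sketch below implements the three steps in NumPy: random initial weights, winner-take-all selection, and a winner-only update that pulls the winning node toward the input. For the winner selection it uses the distance-based similarity measure from the preceding section rather than the weighted-sum output; the data, learning rate, and number of passes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (set-up): three nodes, each fully connected to 2 "sensors",
# with weights drawn uniformly from [0.0, 1.0].
weights = rng.uniform(0.0, 1.0, size=(3, 2))

# Toy input data drawn from three clusters.
data = np.concatenate([
    rng.normal(loc, 0.05, size=(50, 2))
    for loc in ([0.1, 0.1], [0.5, 0.9], [0.9, 0.2])])

lr = 0.1
for _ in range(20):                      # a few passes over the data
    for x in rng.permutation(data):
        # Step 2: the node whose weight vector is most similar to the
        # input (smallest Euclidean distance) is deemed the winner.
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Step 3: only the winner updates, moving toward the input.
        weights[winner] += lr * (x - weights[winner])

print(np.round(weights, 2))  # each row ends up near one cluster centre
```

Because only the winner moves, each node specialises on one region of the input space, which is exactly the "feature detector" behaviour described above.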
https://en.wikipedia.org/wiki/Competitive_learning
Concept learning, also known ascategory learning,concept attainment, andconcept formation, is defined byBruner, Goodnow, & Austin (1956) as "the search for and testing of attributes that can be used to distinguish exemplars from non exemplars of various categories".[a]More simply put, concepts are the mental categories that help us classify objects, events, or ideas, building on the understanding that each object, event, or idea has a set of common relevant features. Thus, concept learning is a strategy which requires a learner to compare and contrast groups or categories that contain concept-relevant features with groups or categories that do not contain concept-relevant features. The process of concept attainment involves five categories. In a concept learning task, a human classifies objects by being shown a set of example objects along with their class labels. The learner simplifies what has been observed by condensing it in the form of an example. This simplified version of what has been learned is then applied to future examples. Concept learning may be simple or complex because learning takes place over many areas. When a concept is difficult, it is less likely that the learner will be able to simplify, and therefore will be less likely to learn. Colloquially, the task is known aslearning from examples.Most theories of concept learning are basedon the storage of exemplarsand avoid summarization or overt abstraction of any kind. Inmachine learning, this theory can be applied in training computer programs.[2] Concept learning must be distinguished from learning by reciting something from memory (recall) or discriminating between two things that differ (discrimination). However, these issues are closely related, since memory recall of facts could be considered a "trivial" conceptual process where prior exemplars representing the concept are invariant. Similarly, while discrimination is not the same as initial concept learning, discrimination processes are involved in refining concepts by means of the repeated presentation of exemplars. Concept attainment is rooted in inductive learning. So, when designing a curriculum or learning through this method, comparing like and unlike examples is key to defining the characteristics of a topic.[3] Concrete concepts are objects that can be perceived by personal sensations and perceptions. These are objects like chairs and dogs, with which personal interactions occur and create a concept.[4]A concept becomes more concrete as the word we associate with it refers to a perceivable entity.[5]According to Paivio’sdual-coding theory, concrete concepts are the ones that are remembered more easily, owing to their perceptual memory codes.[6]Evidence has shown that when words are heard they are associated with a concrete concept and re-enact any previous interaction with the word within the sensorimotor system.[7]Examples of concrete concepts in learning are early educational math concepts like adding and subtracting. Abstract concepts are words and ideas that deal with emotions, personality traits and events.[8]Terms likefantasyorcoldcarry a more abstract concept. Every person has their own ever-changing personal definition of abstract concepts. For example, cold could mean the physical temperature of the surrounding area, or it could describe the behaviour and personality of another person. However, within concrete concepts there is still a level of abstractness; concrete and abstract concepts can be seen on a scale. 
Some ideas like chair and dog are more cut and dried in their perception, but concepts like cold and fantasy are perceived in a more obscure way. Examples of abstract concept learning are topics like religion and ethics. Abstract-concept learning involves comparing stimuli based on a rule (e.g., identity, difference, oddity, greater than, addition, subtraction), including when the stimulus is novel.[9]Abstract-concept learning has three criteria to rule out any alternative explanations and establish the novelty of the stimuli. First, the transfer stimuli have to be novel to the individual; that is, they need to be new stimuli for that individual. Second, the transfer stimuli must not be replicated. Third, to have a full abstract learning experience, there must be an equal amount of baseline performance and transfer performance.[9] Binder, Westbury, McKiernan, Possing, and Medler (2005)[10]used fMRI to scan individuals' brains as they made lexical decisions on abstract and concrete concepts. Abstract concepts elicited greater activation in the left precentral gyrus, left inferior frontal gyrus and sulcus, and left superior temporal gyrus, whereas concrete concepts elicited greater activation in bilateral angular gyri, the right middle temporal gyrus, the left middle frontal gyrus, bilateral posterior cingulate gyri, and bilateral precunei. In 1986Allan Paivio[11]proposed thedual-coding theory, which states that both verbal and visual information is used to represent information. When thinking of the conceptdog, thoughts of both the word dog and an image of a dog occur. Dual-coding theory assumes that abstract concepts involve the verbal semantic system, while concrete concepts additionally involve the visual imagery system. Relational and associated concepts are words, ideas and thoughts that are connected in some form. Relational concepts are connected by a universal definition. Common relational terms are up-down, left-right, and food-dinner. These ideas are learned in our early childhood and are important for children to understand.[12]These concepts are integral to our understanding and reasoning in conservation tasks.[13]Relational terms that are verbs and prepositions have a large influence on how objects are understood. These terms are more likely to create a larger understanding of the object, and they are able to carry over to other languages.[14] Associated concepts are connected by the individual’s past and own perception. Associative concept learning (also called functional concept learning) involves categorizing stimuli into appropriate categories based on a common response or outcome, regardless of perceptual similarity.[15]That is, these thoughts and ideas are associated with other thoughts and ideas that are understood by a few people or only by the individual. An example from elementary school is learning the compass directions North, East, South and West. Teachers have used “Never Eat Soggy Waffles” and “Never Eat Sour Worms”, and students were able to create their own versions to help them learn the directions.[16] Constructs such as aschemaand a script are examples of complex concepts. A schema is an organization of smaller concepts (or features) and is revised by situational information to assist in comprehension. A script on the other hand is a list of actions that a person follows in order to complete a desired goal. An example of a script would be the process of buying a CD. 
There are several actions that must occur before the actual act of purchasing the CD, and a script provides the sequence and proper order of the actions necessary to purchase it successfully. Concept attainment in education and learning is an active learning method, so learning plans, methods, and goals can be chosen to implement it. David Perkins' work on Knowledge as Design offers four questions that outline a learning plan:[17] 1) What are the critical attributes of the concept? 2) What are the purposes of the concept? 3) What are model cases of the concept? 4) What are the arguments for learning the concept?[17] Concept learning has historically been studied with deep influence from the goals and functions that concepts are assumed to have. Research has investigated how the function of concepts influences the learning process, focusing on their external function. Focusing on different models for concept attainment would expand research in this field. When reading articles and studies on this topic, it is necessary to watch for potential bias and to assess the quality of the source.[18][19] In general, the theoretical issues underlying concept learning for machine learning are those underlyinginduction. These issues are addressed in many diverse publications, including literature on subjects likeVersion Spaces,Statistical Learning Theory,PAC Learning,Information Theory, andAlgorithmic Information Theory. Some of the broad theoretical ideas are also discussed by Watanabe (1969, 1985), Solomonoff (1964a, 1964b), and Rendell (1986). It is difficult to make any general statements about human (or animal) concept learning without already assuming a particular psychological theory of concept learning. Although the classical views ofconceptsand concept learning in philosophy speak of a process ofabstraction,data compression, simplification, and summarization, currently popular psychological theories of concept learning diverge on all these basic points. The history of psychology has seen the rise and fall of many theories about concept learning.Classical conditioning(as defined byPavlov) created the earliest experimental technique.Reinforcement learningas described byWatsonand elaborated byClark Hullcreated a lasting paradigm inbehavioral psychology.Cognitive psychologyemphasized a computer and information flow metaphor for concept formation.Neural networkmodels of concept formation and the structure of knowledge have opened the way to powerful hierarchical models of knowledge organization, such asGeorge Miller'sWordnet. Neural networks are based on computational models of learning usingfactor analysisorconvolution. Neural networks also are open toneuroscienceandpsychophysiologicalmodels of learning followingKarl LashleyandDonald Hebb. Rule-based theories of concept learning began withcognitive psychologyand early computer models of learning that might be implemented in a high-level computer language with computational statements such asif:thenproduction rules. They take classification data and a rule-based theory as input, and a rule-based learner produces what is hoped to be a more accurate model of the data.[20]The majority of rule-based models that have been developed are heuristic, meaning that rational analyses have not been provided and the models are not related to statistical approaches to induction. 
A rational analysis for rule-based models could presume that concepts are represented as rules, and would then ask what degree of belief a rational agent should assign to each rule, given some observed examples.[21]Rule-based theories of concept learning are focused more onperceptual learningand less on definition learning. Rules can be used in learning when the stimuli are confusable, as opposed to simple. When rules are used in learning, decisions are made based on properties alone and rely on simple criteria that do not require much memory.[22] Example of rule-based theory: "A radiologist using rule-based categorization would observe whether specific properties of an X-ray image meet certain criteria; for example, is there an extreme difference in brightness in a suspicious region relative to other regions? A decision is then based on this property alone."[22] Theprototype view of concept learningholds that people abstract out the central tendency (or prototype) of the examples experienced and use this as a basis for their categorization decisions. It further holds that people categorize based on one or more central examples of a given category followed by a penumbra of decreasingly typical examples. This implies that people do not categorize based on a list of things that all correspond to a definition, but rather on a hierarchical inventory based on semantic similarity to the central example(s). Exemplar theoryis the storage of specific instances (exemplars), with new objects evaluated only with respect to how closely they resemble specific known members (and nonmembers) of the category. This theory hypothesizes that learners store examplesverbatim. This theory views concept learning as highly simplistic. Only individual properties are represented. These individual properties are not abstract and they do not create rules. An example of what exemplar theory might look like is "water is wet". It is simply known that some (or one, or all) stored examples of water have the property wet. Exemplar-based theories have become more empirically popular over the years, with some evidence suggesting that human learners use exemplar-based strategies only in early learning, forming prototypes and generalizations later in life. An important result of exemplar models in psychology literature has been a de-emphasis of complexity in concept learning. One of the best known exemplar theories of concept learning is the generalized context model (GCM). A problem with exemplar theory is that exemplar models critically depend on two measures: similarity between exemplars, and having a rule to determine group membership. Sometimes it is difficult to attain or distinguish these measures. More recently, cognitive psychologists have begun to explore the idea that the prototype and exemplar models form two extremes. It has been suggested that people are able to form multiple prototype representations, in addition to the two extreme representations. For example, consider the category 'spoon'. There are two distinct subgroups or conceptual clusters: spoons tend to be either large and wooden, or small and made of metal. The prototypical spoon would then be a medium-size object made of a mixture of metal and wood, which is clearly an unrealistic proposal. A more natural representation of the category 'spoon' would instead consist of multiple (at least two) prototypes, one for each cluster. 
A number of different proposals have been made in this regard (Anderson, 1991; Griffiths, Canini, Sanborn & Navarro, 2007; Love, Medin & Gureckis, 2004; Vanpaemel & Storms, 2008). These models can be regarded as providing a compromise between exemplar and prototype models. The basic idea ofexplanation-based learningsuggests that a new concept is acquired by experiencing examples of it and forming a basic outline. Put simply, by observing or receiving the qualities of a thing the mind forms a concept which possesses and is identified by those qualities. The original theory, proposed by Mitchell, Keller, and Kedar-Cabelli in 1986 and called explanation-based generalization, is that learning occurs through progressive generalizing. This theory was first developed to program machines to learn. When applied to human cognition, it translates as follows: the mind actively separates information that applies to more than one thing and enters it into a broader description of a category of things. This is done by identifying sufficient conditions for something to fit in a category, similar to schematizing. The revised model revolves around the integration of four mental processes – generalization, chunking, operationalization, and analogy. This particular theory of concept learning is relatively new and more research is being conducted to test it. Taking a mathematical approach to concept learning, Bayesian theories propose that the human mind producesprobabilitiesfor a certain concept definition, based on examples it has seen of that concept.[23]The Bayesianprior probabilitykeeps the definition from being overly specific, while thelikelihoodof a hypothesis ensures the definition is not too broad. If, say, a child is shown three horses by a parent and told these are called "horses" – she needs to work out exactly what the adult means by this word. She is much more likely to define the wordhorsesas referring to either thistype of animalorall animals, rather than an oddly specific example like "all horses except Clydesdales", which would be an unnatural concept. Meanwhile, the likelihood ofhorsesmeaning 'all animals' when the three animals shown are all very similar is low. The hypothesis that the wordhorserefers to allanimals of this speciesis most likely of the three possible definitions, as it has both a reasonable prior probability and likelihood given examples. Bayes' theoremis important because it provides a powerful tool for understanding, manipulating and controlling data that takes a larger view not limited to data analysis alone. The approach is subjective, and this requires the assessment of prior probabilities, making it also very complex. However, if Bayesians show that the accumulated evidence and the application of Bayes' law are sufficient, the work will overcome the subjectivity of the inputs involved. Bayesian inference can be used for any honestly collected data and has a major advantage because of its scientific focus. One model that incorporates the Bayesian theory of concept learning is theACT-Rmodel, developed byJohn R. Anderson.[citation needed]The ACT-R model is a programming language that defines the basic cognitive and perceptual operations that enable the human mind, producing a step-by-step simulation of human behavior. This theory exploits the idea that each task humans perform consists of a series of discrete operations. 
The model has been applied to learning and memory, higher level cognition, natural language, perception and attention, human-computer interaction, education, and computer generated forces.[citation needed] In addition to John R. Anderson,Joshua Tenenbaumhas been a contributor to the field of concept learning; he studied the computational basis of human learning and inference using behavioral testing of adults, children, and machines, drawing on Bayesian statistics and probability theory, but also on geometry, graph theory, and linear algebra. Tenenbaum is working to achieve a better understanding of human learning in computational terms and trying to build computational systems that come closer to the capacities of human learners. M. D. Merrill's component display theory (CDT) is a cognitive matrix that focuses on the interaction between two dimensions: the level of performance expected from the learner and the types of content of the material to be learned. Merrill classifies a learner's level of performance as find, use, or remember, and the material content as facts, concepts, procedures, or principles. The theory also calls upon four primary presentation forms and several other secondary presentation forms. The primary presentation forms include: rules, examples, recall, and practice. Secondary presentation forms include: prerequisites, objectives, helps, mnemonics, and feedback. A complete lesson includes a combination of primary and secondary presentation forms, but the most effective combination varies from learner to learner and also from concept to concept. Another significant aspect of the CDT model is that it allows for the learner to control the instructional strategies used and adapt them to meet his or her own learning style and preference. A major goal of this model was to reduce three common errors in concept formation: over-generalization, under-generalization and misconception.
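The Bayesian account described above can be made concrete with a short sketch in the spirit of Tenenbaum's work: a toy hypothesis space of candidate concepts, a prior favouring natural concepts, and a likelihood under which each example is sampled uniformly from the concept (the "size principle", which makes smaller, more specific concepts win as consistent examples accumulate). The hypothesis space and prior values below are invented purely for illustration.

```python
# Toy hypothesis space: each concept is the set of things the word could denote.
hypotheses = {
    "this species of horse": {"clydesdale", "arabian", "mustang"},
    "all horses":            {"clydesdale", "arabian", "mustang", "pony"},
    "all animals":           {"clydesdale", "arabian", "mustang", "pony",
                              "dog", "cat", "cow", "sparrow"},
}
# Prior: natural concepts are a priori more plausible (invented values).
prior = {"this species of horse": 0.3, "all horses": 0.4, "all animals": 0.3}

def posterior(examples):
    """Posterior over hypotheses given examples, using the size principle:
    P(example | h) = 1/|h| if the example is in h, else 0."""
    scores = {}
    for h, members in hypotheses.items():
        likelihood = 1.0
        for ex in examples:
            likelihood *= (1.0 / len(members)) if ex in members else 0.0
        scores[h] = prior[h] * likelihood
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

# After seeing three similar animals labelled "horse", the posterior
# concentrates on the more specific hypotheses, as the text describes.
for h, p in posterior(["arabian", "mustang", "arabian"]).items():
    print(f"{h}: {p:.3f}")
```

The likelihood term is what rules out overly broad concepts such as "all animals", while the prior is what rules out unnatural ones such as "all horses except Clydesdales".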
https://en.wikipedia.org/wiki/Concept_learning
Thedistributional learning theoryorlearning of probability distributionis a framework incomputational learning theory. It was proposed byMichael Kearns,Yishay Mansour,Dana Ron,Ronitt Rubinfeld,Robert SchapireandLinda Selliein 1994[1]and was inspired by thePAC-frameworkintroduced byLeslie Valiant.[2] In this framework the input is a number of samples drawn from a distribution that belongs to a specific class of distributions. The goal is to find an efficient algorithm that, based on these samples, determines with high probability the distribution from which the samples have been drawn. Because of its generality, this framework has been used in a large variety of different fields likemachine learning,approximation algorithms,applied probabilityandstatistics. This article explains the basic definitions, tools and results in this framework from the theory of computation point of view. LetX{\displaystyle \textstyle X}be the support of the distributions of interest. As in the original work of Kearns et al.[1]ifX{\displaystyle \textstyle X}is finite it can be assumed without loss of generality thatX={0,1}n{\displaystyle \textstyle X=\{0,1\}^{n}}wheren{\displaystyle \textstyle n}is the number of bits that have to be used in order to represent anyy∈X{\displaystyle \textstyle y\in X}. We focus on probability distributions overX{\displaystyle \textstyle X}. There are two possible representations of a probability distributionD{\displaystyle \textstyle D}overX{\displaystyle \textstyle X}: a generator, which outputs samples distributed according toD{\displaystyle \textstyle D}, and an evaluator, which for any giveny∈X{\displaystyle \textstyle y\in X}returns the probability thatD{\displaystyle \textstyle D}assigns to it. A distributionD{\displaystyle \textstyle D}is said to have a polynomial generator (respectively evaluator) if its generator (respectively evaluator) exists and can be computed in polynomial time. LetCX{\displaystyle \textstyle C_{X}}be a class of distributions over X; that is,CX{\displaystyle \textstyle C_{X}}is a set such that everyD∈CX{\displaystyle \textstyle D\in C_{X}}is a probability distribution with supportX{\displaystyle \textstyle X}.CX{\displaystyle \textstyle C_{X}}can also be written asC{\displaystyle \textstyle C}for simplicity. Before defining learnability, it is necessary to define good approximations of a distributionD{\displaystyle \textstyle D}. There are several ways to measure the distance between two distributions. The three most common choices are theKullback-Leibler divergence, thetotal variationdistance and theKolmogorov distance. The strongest of these distances is theKullback-Leibler divergenceand the weakest is theKolmogorov distance: for any pair of distributionsD{\displaystyle \textstyle D},D′{\displaystyle \textstyle D'}the Kolmogorov distance is at most the total variation distance, which in turn, byPinsker's inequality, is at most the square root of half the Kullback-Leibler divergence. Therefore, for example ifD{\displaystyle \textstyle D}andD′{\displaystyle \textstyle D'}are close with respect toKullback-Leibler divergencethen they are also close with respect to all the other distances. The next definitions hold for all of these distances, and therefore the symbold(D,D′){\displaystyle \textstyle d(D,D')}denotes the distance between the distributionD{\displaystyle \textstyle D}and the distributionD′{\displaystyle \textstyle D'}using one of the distances described above. Although learnability of a class of distributions can be defined using any of these distances, applications refer to a specific distance. The basic input used in order to learn a distribution is a number of samples drawn from this distribution. From the computational point of view the assumption is that such a sample is given in a constant amount of time. So it is like having access to an oracleGEN(D){\displaystyle \textstyle GEN(D)}that returns a sample from the distributionD{\displaystyle \textstyle D}. 
Apart from measuring the time complexity, sometimes the interest is in measuring the number of samples that have to be used in order to learn a specific distributionD{\displaystyle \textstyle D}in a class of distributionsC{\displaystyle \textstyle C}. This quantity is called thesample complexityof the learning algorithm. In order to make the problem of distribution learning clearer, consider the problem of supervised learning as defined in [3]. In this framework ofstatistical learning theoryone is given a training setS={(x1,y1),…,(xn,yn)}{\displaystyle \textstyle S=\{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}}and the goal is to find a target functionf:X→Y{\displaystyle \textstyle f:X\rightarrow Y}that minimizes some loss function, e.g. the square loss function. More formallyf=arg⁡ming∫V(y,g(x))dρ(x,y){\displaystyle f=\arg \min _{g}\int V(y,g(x))d\rho (x,y)}, whereV(⋅,⋅){\displaystyle V(\cdot ,\cdot )}is the loss function, e.g.V(y,z)=(y−z)2{\displaystyle V(y,z)=(y-z)^{2}}andρ(x,y){\displaystyle \rho (x,y)}the probability distribution according to which the elements of the training set are sampled. If theconditional probability distributionρx(y){\displaystyle \rho _{x}(y)}is known then the target function has the closed formf(x)=∫yydρx(y){\displaystyle f(x)=\int _{y}yd\rho _{x}(y)}. So the setS{\displaystyle S}is a set of samples from theprobability distributionρ(x,y){\displaystyle \rho (x,y)}. The goal of distributional learning theory is to findρ{\displaystyle \rho }givenS{\displaystyle S}, which can then be used to find the target functionf{\displaystyle f}. Definition of learnability: A class of distributionsC{\displaystyle \textstyle C}is calledefficiently learnableif for everyϵ>0{\displaystyle \textstyle \epsilon >0}and0<δ≤1{\displaystyle \textstyle 0<\delta \leq 1}given access toGEN(D){\displaystyle \textstyle GEN(D)}for an unknown distributionD∈C{\displaystyle \textstyle D\in C}, there exists a polynomial time algorithmA{\displaystyle \textstyle A}, called a learning algorithm ofC{\displaystyle \textstyle C}, that outputs a generator or an evaluator of a distributionD′{\displaystyle \textstyle D'}such thatPr[d(D,D′)≤ϵ]≥1−δ{\displaystyle \textstyle \Pr[d(D,D')\leq \epsilon ]\geq 1-\delta }. If we know thatD′∈C{\displaystyle \textstyle D'\in C}thenA{\displaystyle \textstyle A}is called aproper learning algorithm; otherwise it is called animproper learning algorithm. In some settings the class of distributionsC{\displaystyle \textstyle C}is a class of well known distributions which can be described by a set of parameters. For instanceC{\displaystyle \textstyle C}could be the class of all the Gaussian distributionsN(μ,σ2){\displaystyle \textstyle N(\mu ,\sigma ^{2})}. In this case the algorithmA{\displaystyle \textstyle A}should be able to estimate the parametersμ,σ{\displaystyle \textstyle \mu ,\sigma }, andA{\displaystyle \textstyle A}is then called aparameter learning algorithm. Parameter learning for simple distributions is a very well studied field, called statistical estimation, and there is a very long bibliography on different estimators for different kinds of simple known distributions. But distributional learning theory deals with learning classes of distributions that have more complicated descriptions. In their seminal work, Kearns et al. 
deal with the case whereA{\displaystyle \textstyle A}is described in terms of a finite polynomial-sized circuit, and they proved learnability results for some specific classes of distributions.[1] One very common technique in order to find a learning algorithm for a class of distributionsC{\displaystyle \textstyle C}is to first find a smallϵ−{\displaystyle \textstyle \epsilon -}cover ofC{\displaystyle \textstyle C}. Definition: A setCϵ{\displaystyle \textstyle C_{\epsilon }}is called anϵ{\displaystyle \textstyle \epsilon }-cover ofC{\displaystyle \textstyle C}if for everyD∈C{\displaystyle \textstyle D\in C}there is aD′∈Cϵ{\displaystyle \textstyle D'\in C_{\epsilon }}such thatd(D,D′)≤ϵ{\displaystyle \textstyle d(D,D')\leq \epsilon }. Anϵ−{\displaystyle \textstyle \epsilon -}cover is small if it has polynomial size with respect to the parameters that describeD{\displaystyle \textstyle D}. Once there is an efficient procedure that for everyϵ>0{\displaystyle \textstyle \epsilon >0}finds a smallϵ−{\displaystyle \textstyle \epsilon -}coverCϵ{\displaystyle \textstyle C_{\epsilon }}of C, the only remaining task is to select fromCϵ{\displaystyle \textstyle C_{\epsilon }}the distributionD′∈Cϵ{\displaystyle \textstyle D'\in C_{\epsilon }}that is closest to the distributionD∈C{\displaystyle \textstyle D\in C}that has to be learned. The problem is that givenD′,D″∈Cϵ{\displaystyle \textstyle D',D''\in C_{\epsilon }}it is not trivial to compared(D,D′){\displaystyle \textstyle d(D,D')}andd(D,D″){\displaystyle \textstyle d(D,D'')}in order to decide which one is the closest toD{\displaystyle \textstyle D}, becauseD{\displaystyle \textstyle D}is unknown. Therefore, the samples fromD{\displaystyle \textstyle D}have to be used to do these comparisons. Obviously the result of the comparison always has a probability of error, so the task is similar to finding the minimum of a set of elements using noisy comparisons. There are many classical algorithms for achieving this goal. The most recent one, which achieves the best guarantees, was proposed byDaskalakisandKamath.[4]This algorithm sets up a fast tournament between the elements ofCϵ{\displaystyle \textstyle C_{\epsilon }}where the winnerD∗{\displaystyle \textstyle D^{*}}of this tournament is the element which isϵ−{\displaystyle \textstyle \epsilon -}close toD{\displaystyle \textstyle D}(i.e.d(D∗,D)≤ϵ{\displaystyle \textstyle d(D^{*},D)\leq \epsilon }) with probability at least1−δ{\displaystyle \textstyle 1-\delta }. In order to do so their algorithm usesO(log⁡N/ϵ2){\displaystyle \textstyle O(\log N/\epsilon ^{2})}samples fromD{\displaystyle \textstyle D}and runs inO(Nlog⁡N/ϵ2){\displaystyle \textstyle O(N\log N/\epsilon ^{2})}time, whereN=|Cϵ|{\displaystyle \textstyle N=|C_{\epsilon }|}. Learning of simple well known distributions is a well studied field and there are many estimators that can be used. One more complicated class of distributions is the distribution of a sum of variables that follow simple distributions. These learning procedures have a close relation with limit theorems like the central limit theorem, because they tend to examine the same object as the sum tends to an infinite sum. Two recent results, described below, concern learning Poisson binomial distributions and learning sums of independent integer random variables. All the results below hold using thetotal variationdistance as a distance measure. 
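Before turning to those results, here is a minimal sketch of the selection step described above for discrete distributions on a finite support: each candidate in the cover is a probability vector (an evaluator), and candidates are compared with a Scheffé-style test that uses only samples from D. This illustrates the idea of selecting from the cover with noisy, sample-based comparisons; it is a simple sequential elimination, not the Daskalakis–Kamath tournament itself, and the support size, cover, and sample count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def scheffe_winner(p, q, samples):
    """Scheffé test: on the set A = {x : p(x) > q(x)}, compare each
    candidate's mass of A with the empirical mass of A under the samples;
    the candidate closer to the empirical estimate wins."""
    A = p > q
    empirical = np.mean(A[samples])   # fraction of samples falling in A
    return p if abs(p[A].sum() - empirical) <= abs(q[A].sum() - empirical) else q

def select_from_cover(cover, samples):
    """Sequential elimination: carry the current winner through all pairings."""
    best = cover[0]
    for cand in cover[1:]:
        best = scheffe_winner(best, cand, samples)
    return best

# Unknown distribution D on support {0,...,9} and a small "cover" of candidates
# that happens to contain D itself.
support = 10
D = rng.dirichlet(np.ones(support))
cover = [rng.dirichlet(np.ones(support)) for _ in range(20)] + [D + 0.0]
samples = rng.choice(support, size=5000, p=D)

chosen = select_from_cover(cover, samples)
tv = 0.5 * np.abs(chosen - D).sum()   # total variation distance to the truth
print(f"TV distance of selected candidate: {tv:.3f}")
```

The key point is that no comparison ever evaluates d(D, ·) directly; every decision is made through the empirical behaviour of the samples, which is why each comparison is noisy and the guarantees are probabilistic.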
Considern{\displaystyle \textstyle n}independent Bernoulli random variablesX1,…,Xn{\displaystyle \textstyle X_{1},\dots ,X_{n}}with probabilities of successp1,…,pn{\displaystyle \textstyle p_{1},\dots ,p_{n}}. A Poisson Binomial Distribution of ordern{\displaystyle \textstyle n}is the distribution of the sumX=∑iXi{\displaystyle \textstyle X=\sum _{i}X_{i}}. Consider the classPBD={D:Dis a Poisson binomial distribution}{\displaystyle \textstyle PBD=\{D:D~{\text{ is a Poisson binomial distribution}}\}}. The first of the following results deals with improper learning ofPBD{\displaystyle \textstyle PBD}and the second with proper learning ofPBD{\displaystyle \textstyle PBD}.[5] Theorem: LetD∈PBD{\displaystyle \textstyle D\in PBD}then there is an algorithm which givenn{\displaystyle \textstyle n},ϵ>0{\displaystyle \textstyle \epsilon >0},0<δ≤1{\displaystyle \textstyle 0<\delta \leq 1}and access toGEN(D){\displaystyle \textstyle GEN(D)}finds aD′{\displaystyle \textstyle D'}such thatPr[d(D,D′)≤ϵ]≥1−δ{\displaystyle \textstyle \Pr[d(D,D')\leq \epsilon ]\geq 1-\delta }. The sample complexity of this algorithm isO~((1/ϵ3)log⁡(1/δ)){\displaystyle \textstyle {\tilde {O}}((1/\epsilon ^{3})\log(1/\delta ))}and the running time isO~((1/ϵ3)log⁡nlog2⁡(1/δ)){\displaystyle \textstyle {\tilde {O}}((1/\epsilon ^{3})\log n\log ^{2}(1/\delta ))}. Theorem: LetD∈PBD{\displaystyle \textstyle D\in PBD}then there is an algorithm which givenn{\displaystyle \textstyle n},ϵ>0{\displaystyle \textstyle \epsilon >0},0<δ≤1{\displaystyle \textstyle 0<\delta \leq 1}and access toGEN(D){\displaystyle \textstyle GEN(D)}finds aD′∈PBD{\displaystyle \textstyle D'\in PBD}such thatPr[d(D,D′)≤ϵ]≥1−δ{\displaystyle \textstyle \Pr[d(D,D')\leq \epsilon ]\geq 1-\delta }. The sample complexity of this algorithm isO~((1/ϵ2)log⁡(1/δ)){\displaystyle \textstyle {\tilde {O}}((1/\epsilon ^{2})\log(1/\delta ))}and the running time is(1/ϵ)O(log2⁡(1/ϵ))O~(log⁡nlog⁡(1/δ)){\displaystyle \textstyle (1/\epsilon )^{O(\log ^{2}(1/\epsilon ))}{\tilde {O}}(\log n\log(1/\delta ))}. A notable aspect of the above results is that the sample complexity of the learning algorithm does not depend onn{\displaystyle \textstyle n}, although the description ofD{\displaystyle \textstyle D}is linear inn{\displaystyle \textstyle n}. Also the second result is almost optimal with respect to the sample complexity, because there is also a matching lower bound ofΩ(1/ϵ2){\displaystyle \textstyle \Omega (1/\epsilon ^{2})}. The proof uses a smallϵ−{\displaystyle \textstyle \epsilon -}cover ofPBD{\displaystyle \textstyle PBD}, produced by Daskalakis and Papadimitriou,[6]in order to obtain this algorithm. Considern{\displaystyle \textstyle n}independent random variablesX1,…,Xn{\displaystyle \textstyle X_{1},\dots ,X_{n}}each of which follows an arbitrary distribution with support{0,1,…,k−1}{\displaystyle \textstyle \{0,1,\dots ,k-1\}}. Ak−{\displaystyle \textstyle k-}sum of independent integer random variables of ordern{\displaystyle \textstyle n}is the distribution of the sumX=∑iXi{\displaystyle \textstyle X=\sum _{i}X_{i}}. 
Considern{\displaystyle \textstyle n}independent random variablesX1,…,Xn{\displaystyle \textstyle X_{1},\dots ,X_{n}}each of which follows an arbitrary distribution with support{0,1,…,k−1}{\displaystyle \textstyle \{0,1,\dots ,k-1\}}. Ak−{\displaystyle \textstyle k-}sum of independent integer random variables of ordern{\displaystyle \textstyle n}is the distribution of the sumX=∑iXi{\displaystyle \textstyle X=\sum _{i}X_{i}}. For learning the class k−SIIRV={D:Dis a k-sum of independent integer random variable}{\displaystyle \textstyle k-SIIRV=\{D:D{\text{is a k-sum of independent integer random variable }}\}} there is the following result. Theorem LetD∈k−SIIRV{\displaystyle \textstyle D\in k-SIIRV}. Then there is an algorithm which, givenn{\displaystyle \textstyle n},ϵ>0{\displaystyle \textstyle \epsilon >0}and access toGEN(D){\displaystyle \textstyle GEN(D)}, finds aD′{\displaystyle \textstyle D'}such thatPr[d(D,D′)≤ϵ]≥1−δ{\displaystyle \textstyle \Pr[d(D,D')\leq \epsilon ]\geq 1-\delta }. The sample complexity of this algorithm ispoly(k/ϵ){\displaystyle \textstyle {\text{poly}}(k/\epsilon )}and the running time is alsopoly(k/ϵ){\displaystyle \textstyle {\text{poly}}(k/\epsilon )}. Again, the sample and time complexity do not depend onn{\displaystyle \textstyle n}; the corresponding independence in the previous section follows as the special casek=2{\displaystyle \textstyle k=2}.[7] Let the random variablesX∼N(μ1,Σ1){\displaystyle \textstyle X\sim N(\mu _{1},\Sigma _{1})}andY∼N(μ2,Σ2){\displaystyle \textstyle Y\sim N(\mu _{2},\Sigma _{2})}. Define the random variableZ{\displaystyle \textstyle Z}which takes the same value asX{\displaystyle \textstyle X}with probabilityw1{\displaystyle \textstyle w_{1}}and the same value asY{\displaystyle \textstyle Y}with probabilityw2=1−w1{\displaystyle \textstyle w_{2}=1-w_{1}}. Then, ifF1{\displaystyle \textstyle F_{1}}is the density ofX{\displaystyle \textstyle X}andF2{\displaystyle \textstyle F_{2}}is the density ofY{\displaystyle \textstyle Y}, the density ofZ{\displaystyle \textstyle Z}isF=w1F1+w2F2{\displaystyle \textstyle F=w_{1}F_{1}+w_{2}F_{2}}. In this caseZ{\displaystyle \textstyle Z}is said to follow a mixture of Gaussians. Pearson[8]was the first to introduce the notion of a mixture of Gaussians, in his attempt to explain the probability distribution from which the data he wanted to analyze had come. After extensive calculation by hand, he finally fitted his data to a mixture of Gaussians. The learning task in this case is to determine the parameters of the mixturew1,w2,μ1,μ2,Σ1,Σ2{\displaystyle \textstyle w_{1},w_{2},\mu _{1},\mu _{2},\Sigma _{1},\Sigma _{2}}. The first attempt to solve this problem was byDasgupta.[9]In this work, Dasgupta assumes that the two means of the Gaussians are far enough from each other, i.e. that there is a lower bound on the distance||μ1−μ2||{\displaystyle \textstyle ||\mu _{1}-\mu _{2}||}. Using this assumption, Dasgupta and many researchers after him were able to learn the parameters of the mixture. The learning procedure starts byclusteringthe samples into two different clusters that minimize some metric. Using the assumption that the means of the Gaussians are far away from each other, with high probability the samples in the first cluster correspond to samples from the first Gaussian and the samples in the second cluster to samples from the second one. Once the samples are partitioned, theμi,Σi{\displaystyle \textstyle \mu _{i},\Sigma _{i}}can be computed from simple statistical estimators andwi{\displaystyle \textstyle w_{i}}by comparing the magnitudes of the clusters. IfGM{\displaystyle \textstyle GM}is the set of all mixtures of two Gaussians, using the above procedure theorems like the following can be proved.
Theorem[9] LetD∈GM{\displaystyle \textstyle D\in GM}with||μ1−μ2||≥cnmax(λmax(Σ1),λmax(Σ2)){\displaystyle \textstyle ||\mu _{1}-\mu _{2}||\geq c{\sqrt {n\max(\lambda _{max}(\Sigma _{1}),\lambda _{max}(\Sigma _{2}))}}}, wherec>1/2{\displaystyle \textstyle c>1/2}andλmax(A){\displaystyle \textstyle \lambda _{max}(A)}is the largest eigenvalue ofA{\displaystyle \textstyle A}. Then there is an algorithm which, givenϵ>0{\displaystyle \textstyle \epsilon >0},0<δ≤1{\displaystyle \textstyle 0<\delta \leq 1}and access toGEN(D){\displaystyle \textstyle GEN(D)}, finds an approximationwi′,μi′,Σi′{\displaystyle \textstyle w'_{i},\mu '_{i},\Sigma '_{i}}of the parameters such thatPr[||wi−wi′||≤ϵ]≥1−δ{\displaystyle \textstyle \Pr[||w_{i}-w'_{i}||\leq \epsilon ]\geq 1-\delta }(and respectively forμi{\displaystyle \textstyle \mu _{i}}andΣi{\displaystyle \textstyle \Sigma _{i}}). The sample complexity of this algorithm isM=2O(log2⁡(1/(ϵδ))){\displaystyle \textstyle M=2^{O(\log ^{2}(1/(\epsilon \delta )))}}and the running time isO(M2d+Mdn){\displaystyle \textstyle O(M^{2}d+Mdn)}. The above result can also be generalized to mixtures ofk{\displaystyle \textstyle k}Gaussians.[9] For a mixture of two Gaussians there are also learning results that make no assumption about the distance between the means, like the following one, which uses the total variation distance as a distance measure. Theorem[10] LetF∈GM{\displaystyle \textstyle F\in GM}. Then there is an algorithm which, givenϵ>0{\displaystyle \textstyle \epsilon >0},0<δ≤1{\displaystyle \textstyle 0<\delta \leq 1}and access toGEN(D){\displaystyle \textstyle GEN(D)}, findswi′,μi′,Σi′{\displaystyle \textstyle w'_{i},\mu '_{i},\Sigma '_{i}}such that ifF′=w1′F1′+w2′F2′{\displaystyle \textstyle F'=w'_{1}F'_{1}+w'_{2}F'_{2}}, whereFi′=N(μi′,Σi′){\displaystyle \textstyle F'_{i}=N(\mu '_{i},\Sigma '_{i})}, thenPr[d(F,F′)≤ϵ]≥1−δ{\displaystyle \textstyle \Pr[d(F,F')\leq \epsilon ]\geq 1-\delta }. The sample complexity and the running time of this algorithm arepoly(n,1/ϵ,1/δ,1/w1,1/w2,1/d(F1,F2)){\displaystyle \textstyle {\text{poly}}(n,1/\epsilon ,1/\delta ,1/w_{1},1/w_{2},1/d(F_{1},F_{2}))}. The distance betweenF1{\displaystyle \textstyle F_{1}}andF2{\displaystyle \textstyle F_{2}}does not affect the quality of the result of the algorithm, only the sample complexity and the running time.[9][10]
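The clustering-based procedure described above can be illustrated with a small simulation: samples from a well-separated two-Gaussian mixture are split by 2-means clustering, and the parameters are then read off from each cluster with simple estimators. The data, separation, and iteration counts are assumptions of this sketch, not those of the cited theorems.

import numpy as np

rng = np.random.default_rng(2)

# Samples from a well-separated mixture (the separation assumption above)
n1, n2 = 600, 400
X = np.vstack([rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n1, 2)),
               rng.normal(loc=[8.0, 8.0], scale=1.5, size=(n2, 2))])

# Lloyd's algorithm with 2 centers (assumes neither cluster becomes empty)
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(50):
    labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])

# Per-cluster estimators for the weights, means, and covariances
for j in range(2):
    C = X[labels == j]
    print(len(C) / len(X), C.mean(axis=0).round(2), np.cov(C.T).round(2))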
https://en.wikipedia.org/wiki/Distribution_learning_theory
Inartificial intelligence,eager learningis a learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed tolazy learning, where generalization beyond the training data is delayed until a query is made to the system.[1]The main advantage gained in employing an eager learning method, such as anartificial neural network, is that the target function will be approximated globally during training, thus requiring much less space than using a lazy learning system. Eager learning systems also deal much better with noise in thetraining data. Eager learning is an example ofoffline learning, in which post-training queries to the system have no effect on the system itself, and thus the same query to the system will always produce the same result. The main disadvantage with eager learning is that it is generally unable to provide good local approximations in the target function.[2] Thisartificial intelligence-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Eager_learning
Deep reinforcement learning(DRL) is a subfield ofmachine learningthat combines principles ofreinforcement learning(RL) anddeep learning. It involves training agents to make decisions by interacting with an environment to maximize cumulative rewards, while usingdeep neural networksto represent policies, value functions, or environment models. This integration enables DRL systems to process high-dimensional inputs, such as images or continuous control signals, making the approach effective for solving complex tasks.[1] Since the introduction of thedeep Q-network (DQN)in 2015, DRL has achieved significant successes across domains includinggames,robotics, andautonomous systems. Research in DRL continues to expand rapidly, with active work on challenges such as sample efficiency and robustness, as well as innovations in model-based methods, transformer architectures, and open-ended learning. Applications now range from healthcare and finance to language systems and autonomous vehicles.[2] Reinforcement learning (RL) is a framework in which agents interact with environments by taking actions and learning from feedback in the form of rewards or penalties. Traditional RL methods, such asQ-learningand policy gradient techniques, rely on tabular representations or linear approximations, which are often not scalable to high-dimensional or continuous input spaces. DRL emerged as a solution to this limitation by integrating RL withdeep neural networks. This combination enables agents to approximate complex functions and handle unstructured input data like raw images, sensor data, or natural language. The approach became widely recognized following the success of DeepMind's deep Q-network (DQN), which achieved human-level performance on several Atari video games using only pixel inputs and game scores as feedback.[3] Since then, DRL has evolved to include various architectures and learning strategies, including model-based methods, actor-critic frameworks, and applications in continuous control environments.[4]These developments have significantly expanded the applicability of DRL across domains where traditional RL was limited. Several algorithmic approaches form the foundation of deep reinforcement learning, each with different strategies for learning optimal behavior. One of the earliest and most influential DRL algorithms is the Deep Q-Network (DQN), which combines Q-learning with deep neural networks. DQN approximates the optimal action-value function using a convolutional neural network and introduced techniques such as experience replay and target networks, which stabilize training.[5]
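The following is a compressed sketch of that DQN update, showing the two stabilization techniques just mentioned. It assumes PyTorch, a small fully connected network standing in for DQN's convolutional one, and transitions (state, action, reward, next_state, done) already collected into the buffer from some environment; it is an illustration of the idea, not DeepMind's implementation.

import random
from collections import deque
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net.load_state_dict(q_net.state_dict())   # frozen copy of the Q-network
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                    # experience replay buffer
gamma = 0.99

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)    # random sampling breaks correlations
    s = torch.stack([torch.as_tensor(t[0], dtype=torch.float32) for t in batch])
    a = torch.tensor([t[1] for t in batch])
    r = torch.tensor([t[2] for t in batch], dtype=torch.float32)
    s2 = torch.stack([torch.as_tensor(t[3], dtype=torch.float32) for t in batch])
    done = torch.tensor([t[4] for t in batch], dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s, a)
    with torch.no_grad():                                    # bootstrap from the target net
        y = r + gamma * (1.0 - done) * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()

# Periodically sync: target_net.load_state_dict(q_net.state_dict())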
Other methods include multi-agent reinforcement learning, hierarchical RL, and approaches that integrate planning or memory mechanisms, depending on the complexity of the task and environment. DRL has been applied to a wide range of domains that require sequential decision-making and the ability to learn from high-dimensional input data. One of the most well-known applications is ingames, where DRL agents have demonstrated performance comparable to or exceeding human-level benchmarks. DeepMind's AlphaGo and AlphaStar, as well as OpenAI Five, are notable examples of DRL systems mastering complex games such asGo,StarCraft II, andDota 2.[7]While these systems have demonstrated high performance in constrained environments, their success often depends on extensive computational resources and may not generalize easily to tasks outside their training domains. Inrobotics, DRL has been used to train agents for tasks such as locomotion, manipulation, and navigation in both simulated and real-world environments. By learning directly from sensory input, DRL enables robots to adapt to complex dynamics without relying on hand-crafted control rules.[8] Other growing areas of application includefinance(e.g., portfolio optimization),healthcare(e.g., treatment planning and medical decision-making),natural language processing(e.g., dialogue systems), andautonomous vehicles(e.g., path planning and control). These applications show how DRL deals with real-world problems involving uncertainty, sequential reasoning, and high-dimensional data.[9] DRL faces several significant challenges that limit its broader deployment. One of the most prominent issues is sample inefficiency: DRL algorithms often require millions of interactions with the environment to learn effective policies, which is impractical in many real-world settings where data collection is expensive or time-consuming.[10] Another challenge is the sparse or delayed reward problem, where feedback signals are infrequent, making it difficult for agents to attribute outcomes to specific decisions. Techniques such as reward shaping and exploration strategies have been developed to address this issue.[11] DRL systems also tend to be sensitive to hyperparameters and to lack robustness across tasks or environments. Models trained in simulation often fail when deployed in the real world due to discrepancies between simulated and real-world dynamics, a problem known as the "reality gap." Bias and fairness in DRL systems have also emerged as concerns, particularly in domains like healthcare and finance where imbalanced data can lead to unequal outcomes for underrepresented groups. Additionally, concerns about safety, interpretability, and reproducibility have become increasingly important, especially in high-stakes domains such as healthcare or autonomous driving. These issues remain active areas of research in the DRL community. Recent developments in DRL have introduced new architectures and training strategies that aim to improve performance, efficiency, and generalization. One key area of progress is model-based reinforcement learning, where agents learn an internal model of the environment to simulate outcomes before acting. This kind of approach improves sample efficiency and planning.
An example is the Dreamer algorithm, which learns a latent space model to train agents more efficiently in complex environments.[12] Another major innovation is the use of transformer-based architectures in DRL. Unlike traditional models that rely on recurrent or convolutional networks, transformers can model long-term dependencies more effectively. The Decision Transformer and other similar models treat RL as a sequence modeling problem, enabling agents to generalize better across tasks.[13] In addition, research into open-ended learning has led to the creation of capable agents that are able to solve a range of tasks without task-specific tuning. Systems such as those developed by OpenAI show that agents trained in diverse, evolving environments can generalize across new challenges, moving toward more adaptive and flexible intelligence.[14] As deep reinforcement learning continues to evolve, researchers are exploring ways to make algorithms more efficient, robust, and generalizable across a wide range of tasks. Improving sample efficiency through model-based learning, enhancing generalization with open-ended training environments, and integrating foundation models are among the current research goals. A related area of interest is safe and ethical deployment, particularly in high-risk settings like healthcare, autonomous driving, and finance. Researchers are developing frameworks for safer exploration, interpretability, and better alignment with human values. Ensuring that DRL systems promote equitable outcomes remains an ongoing challenge, especially where historical data may under-represent marginalized populations. The future of DRL may also involve more integration with other subfields of machine learning, such as unsupervised learning, transfer learning, and large language models, enabling agents that can learn from diverse data modalities and interact more naturally with human users.[15]
https://en.wikipedia.org/wiki/End-to-end_reinforcement_learning
InPAC learning,error tolerancerefers to the ability of analgorithmto learn when the examples received have been corrupted in some way. This is a very common and important issue, since in many applications it is not possible to access noise-free data. Noise can interfere with the learning process at different levels: the algorithm may receive data that have been occasionally mislabeled, the inputs may contain some false information, or the classification of the examples may have been maliciously adulterated. In the following, letX{\displaystyle X}be ourn{\displaystyle n}-dimensional input space. LetH{\displaystyle {\mathcal {H}}}be a class of functions that we wish to use in order to learn a{0,1}{\displaystyle \{0,1\}}-valued target functionf{\displaystyle f}defined overX{\displaystyle X}. LetD{\displaystyle {\mathcal {D}}}be the distribution of the inputs overX{\displaystyle X}. The goal of a learning algorithmA{\displaystyle {\mathcal {A}}}is to choose the best functionh∈H{\displaystyle h\in {\mathcal {H}}}such that it minimizeserror(h)=Px∼D(h(x)≠f(x)){\displaystyle error(h)=P_{x\sim {\mathcal {D}}}(h(x)\neq f(x))}. Let us suppose we have a functionsize(f){\displaystyle size(f)}that can measure the complexity off{\displaystyle f}. LetOracle(x){\displaystyle {\text{Oracle}}(x)}be an oracle that, whenever called, returns an examplex{\displaystyle x}and its correct labelf(x){\displaystyle f(x)}. When no noise corrupts the data, we can definelearning in the Valiant setting:[1][2] Definition:We say thatf{\displaystyle f}is efficiently learnable usingH{\displaystyle {\mathcal {H}}}in theValiantsetting if there exists a learning algorithmA{\displaystyle {\mathcal {A}}}that has access toOracle(x){\displaystyle {\text{Oracle}}(x)}and a polynomialp(⋅,⋅,⋅,⋅){\displaystyle p(\cdot ,\cdot ,\cdot ,\cdot )}such that for any0<ε≤1{\displaystyle 0<\varepsilon \leq 1}and0<δ≤1{\displaystyle 0<\delta \leq 1}it outputs, in a number of calls to the oracle bounded byp(1ε,1δ,n,size(f)){\displaystyle p\left({\frac {1}{\varepsilon }},{\frac {1}{\delta }},n,{\text{size}}(f)\right)}, a functionh∈H{\displaystyle h\in {\mathcal {H}}}that satisfies with probability at least1−δ{\displaystyle 1-\delta }the conditionerror(h)≤ε{\displaystyle {\text{error}}(h)\leq \varepsilon }. In the following we will define learnability off{\displaystyle f}when the data have suffered some modification.[3][4][5] In the classification noise model[6]a noise rate0≤η<12{\displaystyle 0\leq \eta <{\frac {1}{2}}}is introduced. Then, instead ofOracle(x){\displaystyle {\text{Oracle}}(x)}, which always returns the correct label of examplex{\displaystyle x}, algorithmA{\displaystyle {\mathcal {A}}}can only call a faulty oracleOracle(x,η){\displaystyle {\text{Oracle}}(x,\eta )}that will flip the label ofx{\displaystyle x}with probabilityη{\displaystyle \eta }. As in the Valiant case, the goal of a learning algorithmA{\displaystyle {\mathcal {A}}}is to choose the best functionh∈H{\displaystyle h\in {\mathcal {H}}}such that it minimizeserror(h)=Px∼D(h(x)≠f(x)){\displaystyle error(h)=P_{x\sim {\mathcal {D}}}(h(x)\neq f(x))}. In applications it is difficult to have access to the real value ofη{\displaystyle \eta }, but we assume we have access to its upper boundηB{\displaystyle \eta _{B}}.[7]Note that if we allow the noise rate to be1/2{\displaystyle 1/2}, then learning becomes impossible in any amount of computation time, because every label conveys no information about the target function.
Definition:We say thatf{\displaystyle f}is efficiently learnable usingH{\displaystyle {\mathcal {H}}}in theclassification noise modelif there exists a learning algorithmA{\displaystyle {\mathcal {A}}}that has access toOracle(x,η){\displaystyle {\text{Oracle}}(x,\eta )}and a polynomialp(⋅,⋅,⋅,⋅){\displaystyle p(\cdot ,\cdot ,\cdot ,\cdot )}such that for any0≤η≤12{\displaystyle 0\leq \eta \leq {\frac {1}{2}}},0≤ε≤1{\displaystyle 0\leq \varepsilon \leq 1}and0≤δ≤1{\displaystyle 0\leq \delta \leq 1}it outputs, in a number of calls to the oracle bounded byp(11−2ηB,1ε,1δ,n,size(f)){\displaystyle p\left({\frac {1}{1-2\eta _{B}}},{\frac {1}{\varepsilon }},{\frac {1}{\delta }},n,size(f)\right)}, a functionh∈H{\displaystyle h\in {\mathcal {H}}}that satisfies with probability at least1−δ{\displaystyle 1-\delta }the conditionerror(h)≤ε{\displaystyle error(h)\leq \varepsilon }. Statistical query learning[8]is a kind ofactive learningproblem in which the learning algorithmA{\displaystyle {\mathcal {A}}}can decide whether to request information about the likelihoodPf(x){\displaystyle P_{f(x)}}that a functionf{\displaystyle f}correctly labels examplex{\displaystyle x}, and receives an answer accurate within a toleranceα{\displaystyle \alpha }. Formally, whenever the learning algorithmA{\displaystyle {\mathcal {A}}}calls the oracleOracle(x,α){\displaystyle {\text{Oracle}}(x,\alpha )}, it receives as feedback a probabilityQf(x){\displaystyle Q_{f(x)}}such thatQf(x)−α≤Pf(x)≤Qf(x)+α{\displaystyle Q_{f(x)}-\alpha \leq P_{f(x)}\leq Q_{f(x)}+\alpha }. Definition:We say thatf{\displaystyle f}is efficiently learnable usingH{\displaystyle {\mathcal {H}}}in thestatistical query learning modelif there exists a learning algorithmA{\displaystyle {\mathcal {A}}}that has access toOracle(x,α){\displaystyle {\text{Oracle}}(x,\alpha )}and polynomialsp(⋅,⋅,⋅){\displaystyle p(\cdot ,\cdot ,\cdot )},q(⋅,⋅,⋅){\displaystyle q(\cdot ,\cdot ,\cdot )}, andr(⋅,⋅,⋅){\displaystyle r(\cdot ,\cdot ,\cdot )}such that for any0<ε≤1{\displaystyle 0<\varepsilon \leq 1}the following hold: Note that the confidence parameterδ{\displaystyle \delta }does not appear in the definition of learning. This is because the main purpose ofδ{\displaystyle \delta }is to allow the learning algorithm a small probability of failure due to an unrepresentative sample. Since nowOracle(x,α){\displaystyle {\text{Oracle}}(x,\alpha )}always guarantees to meet the approximation criterionQf(x)−α≤Pf(x)≤Qf(x)+α{\displaystyle Q_{f(x)}-\alpha \leq P_{f(x)}\leq Q_{f(x)}+\alpha }, the failure probability is no longer needed. The statistical query model is strictly weaker than the PAC model: any efficiently SQ-learnable class is efficiently PAC learnable in the presence of classification noise, but there exist efficient PAC-learnable problems such asparitythat are not efficiently SQ-learnable.[8] In the malicious classification model[9]an adversary generates errors to foil the learning algorithm. This setting describes situations of error bursts, which may occur when, for a limited time, transmission equipment malfunctions repeatedly. Formally, algorithmA{\displaystyle {\mathcal {A}}}calls an oracleOracle(x,β){\displaystyle {\text{Oracle}}(x,\beta )}that returns a correctly labeled examplex{\displaystyle x}drawn, as usual, from distributionD{\displaystyle {\mathcal {D}}}over the input space with probability1−β{\displaystyle 1-\beta }, but with probabilityβ{\displaystyle \beta }it returns an example drawn from a distribution that is not related toD{\displaystyle {\mathcal {D}}}.
Moreover, this maliciously chosen example may be strategically selected by an adversary who has knowledge off{\displaystyle f},β{\displaystyle \beta },D{\displaystyle {\mathcal {D}}}, or the current progress of the learning algorithm. Definition:Given a boundβB<12{\displaystyle \beta _{B}<{\frac {1}{2}}}for0≤β<12{\displaystyle 0\leq \beta <{\frac {1}{2}}}, we say thatf{\displaystyle f}is efficiently learnable usingH{\displaystyle {\mathcal {H}}}in the malicious classification model if there exists a learning algorithmA{\displaystyle {\mathcal {A}}}that has access toOracle(x,β){\displaystyle {\text{Oracle}}(x,\beta )}and a polynomialp(⋅,⋅,⋅,⋅,⋅){\displaystyle p(\cdot ,\cdot ,\cdot ,\cdot ,\cdot )}such that for any0<ε≤1{\displaystyle 0<\varepsilon \leq 1},0<δ≤1{\displaystyle 0<\delta \leq 1}it outputs, in a number of calls to the oracle bounded byp(11/2−βB,1ε,1δ,n,size(f)){\displaystyle p\left({\frac {1}{1/2-\beta _{B}}},{\frac {1}{\varepsilon }},{\frac {1}{\delta }},n,size(f)\right)}, a functionh∈H{\displaystyle h\in {\mathcal {H}}}that satisfies with probability at least1−δ{\displaystyle 1-\delta }the conditionerror(h)≤ε{\displaystyle error(h)\leq \varepsilon }. In the nonuniform random attribute noise[10][11]model, where the algorithm is learning aBoolean function, a malicious oracleOracle(x,ν){\displaystyle {\text{Oracle}}(x,\nu )}may flip eachi{\displaystyle i}-th bit of examplex=(x1,x2,…,xn){\displaystyle x=(x_{1},x_{2},\ldots ,x_{n})}independently with probabilityνi≤ν{\displaystyle \nu _{i}\leq \nu }. This type of error can irreparably foil the algorithm; in fact, the following theorem holds: In the nonuniform random attribute noise setting, an algorithmA{\displaystyle {\mathcal {A}}}can output a functionh∈H{\displaystyle h\in {\mathcal {H}}}such thaterror(h)<ε{\displaystyle error(h)<\varepsilon }only ifν<2ε{\displaystyle \nu <2\varepsilon }.
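To make the oracle models above concrete, the following sketch simulates a classification-noise oracle Oracle(x, η) and a statistical-query answer within tolerance α obtained by sample averaging. The target function, input distribution, and sample counts are invented for the illustration and are not part of the formal definitions.

import random

rng = random.Random(0)

def draw_x():
    return tuple(rng.randint(0, 1) for _ in range(5))   # inputs from D over {0,1}^5

def f(x):
    return x[0] ^ x[1]                                  # hidden {0,1}-valued target

def noisy_oracle(eta):
    # Oracle(x, eta): a correct example whose label is flipped with probability eta
    x = draw_x()
    y = f(x)
    return (x, 1 - y) if rng.random() < eta else (x, y)

def sq_answer(query, alpha, m=20_000):
    # Simulated STAT oracle: estimate E[query(x, f(x))] by averaging m samples,
    # which lands within tolerance alpha with high probability for large enough m.
    total = 0
    for _ in range(m):
        x = draw_x()
        total += query(x, f(x))
    return total / m

print(noisy_oracle(0.2))
print(sq_answer(lambda x, y: int(x[0] == y), alpha=0.01))   # estimates Pr[x_0 = f(x)] = 0.5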
https://en.wikipedia.org/wiki/Error_tolerance_(PAC_learning)
Inmachine learningandpattern recognition, afeatureis an individual measurable property or characteristic of a data set.[1]Choosing informative, discriminating, and independent features is crucial to produce effectivealgorithmsforpattern recognition,classification, andregressiontasks. Features are usually numeric, but other types such asstringsandgraphsare used insyntactic pattern recognition, after some pre-processing step such asone-hot encoding. The concept of "features" is related to that ofexplanatory variablesused in statistical techniques such aslinear regression. In feature engineering, two types of features are commonly used: numerical and categorical. Numerical features are continuous values that can be measured on a scale. Examples of numerical features include age, height, weight, and income. Numerical features can be used in machine learning algorithms directly.[citation needed] Categorical featuresare discrete values that can be grouped into categories. Examples of categorical features include gender, color, and zip code. Categorical features typically need to be converted to numerical features before they can be used in machine learning algorithms. This can be done using a variety of techniques, such as one-hot encoding, label encoding, and ordinal encoding. The type of feature that is used in feature engineering depends on the specific machine learning algorithm that is being used. Some machine learning algorithms, such as decision trees, can handle both numerical and categorical features. Other machine learning algorithms, such as linear regression, can only handle numerical features. A numeric feature can be conveniently described by a feature vector. One way to achievebinary classificationis using alinear predictor function(related to theperceptron) with a feature vector as input. The method consists of calculating thescalar productbetween the feature vector and a vector of weights, qualifying those observations whose result exceeds a threshold. Algorithms for classification from a feature vector includenearest neighbor classification,neural networks, andstatistical techniquessuch asBayesian approaches. Incharacter recognition, features may includehistogramscounting the number of black pixels along horizontal and vertical directions, number of internal holes, stroke detection and many others. Inspeech recognition, features for recognizingphonemescan include noise ratios, length of sounds, relative power, filter matches and many others. Inspamdetection algorithms, features may include the presence or absence of certain email headers, the email structure, the language, the frequency of specific terms, the grammatical correctness of the text. Incomputer vision, there are a large number of possiblefeatures, such as edges and objects. Inpattern recognitionandmachine learning, afeature vectoris an n-dimensionalvectorof numerical features that represent some object. Manyalgorithmsin machine learning require a numerical representation of objects, since such representations facilitate processing and statistical analysis. When representing images, the feature values might correspond to the pixels of an image, while when representing texts the features might be the frequencies of occurrence of textual terms. Feature vectors are equivalent to the vectors ofexplanatory variablesused instatisticalprocedures such aslinear regression. 
Feature vectors are often combined with weights using adot productin order to construct alinear predictor functionthat is used to determine a score for making a prediction. Thevector spaceassociated with these vectors is often called thefeature space. In order to reduce the dimensionality of the feature space, a number ofdimensionality reductiontechniques can be employed. Higher-level features can be obtained from already available features and added to the feature vector; for example, for the study of diseases the feature 'Age' is useful and is defined asAge = 'Year of death' minus 'Year of birth'. This process is referred to asfeature construction.[2][3]Feature construction is the application of a set of constructive operators to a set of existing features resulting in construction of new features. Examples of such constructive operators include checking for the equality conditions {=, ≠}, the arithmetic operators {+,−,×, /}, the array operators {max(S), min(S), average(S)} as well as other more sophisticated operators, for example count(S,C)[4]that counts the number of features in the feature vector S satisfying some condition C or, for example, distances to other recognition classes generalized by some accepting device. Feature construction has long been considered a powerful tool for increasing both accuracy and understanding of structure, particularly in high-dimensional problems.[5]Applications include studies of disease andemotion recognitionfrom speech.[6] The initial set of raw features can be redundant and large enough that estimation and optimization is made difficult or ineffective. Therefore, a preliminary step in many applications ofmachine learningandpattern recognitionconsists ofselectinga subset of features, orconstructinga new and reduced set of features to facilitate learning, and to improve generalization and interpretability.[7] Extracting or selecting features is a combination of art and science; developing systems to do so is known asfeature engineering. It requires the experimentation of multiple possibilities and the combination of automated techniques with the intuition and knowledge of thedomain expert. Automating this process isfeature learning, where a machine not only uses features for learning, but learns the features itself.
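A small Python illustration of two operations described above: constructing the 'Age' feature from existing features, and one-hot encoding a categorical feature into a numeric feature vector. The records and field names are invented for the example.

records = [
    {"year_of_birth": 1931, "year_of_death": 1987, "color": "red"},
    {"year_of_birth": 1940, "year_of_death": 2001, "color": "blue"},
]

# Feature construction: Age = 'Year of death' minus 'Year of birth'
for r in records:
    r["age"] = r["year_of_death"] - r["year_of_birth"]

# One-hot encoding of the categorical feature 'color'
categories = sorted({r["color"] for r in records})
vectors = [[r["age"]] + [1 if r["color"] == c else 0 for c in categories]
           for r in records]
print(categories)   # ['blue', 'red']
print(vectors)      # [[56, 0, 1], [61, 1, 0]]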
https://en.wikipedia.org/wiki/Feature_(machine_learning)
Inferential Theory of Learning(ITL) is an area ofmachine learningwhich describes inferential processes performed by learning agents. ITL has been continuously developed byRyszard S. Michalski, starting in the 1980s. The first known publication of ITL was in 1983.[1]In ITL, thelearning processis viewed as a search (inference) through a hypothesis space guided by a specific goal. The results of learning need to bestored, and the stored information will later be used by the learner for futureinferences.[2]Inferences are split into multiple categories, includingconclusive, deduction, and induction. For an inference to be considered complete, all categories must be taken into account.[3]This is how ITL differs from other machine learning theories such asComputational Learning TheoryandStatistical Learning Theory, each of which uses a single form of inference. The most relevant published usage of ITL was in a scientific journal article published in 2012, which used ITL as a way to describe how agent-based learning works. According to the journal, "The Inferential Theory of Learning (ITL) provides an elegant way of describing learning processes by agents".[4] Thisartificial intelligence-related article is astub. You can help Wikipedia byexpanding it. Thissystems-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Inferential_theory_of_learning
Alearning automatonis one type ofmachine learningalgorithm studied since the 1970s. Learning automata select their current action based on past experiences from the environment. They fall within the scope of reinforcement learning if the environment isstochasticand aMarkov decision process(MDP) is used. Research in learning automata can be traced back to the work ofMichael Lvovitch Tsetlinin the early 1960s in the Soviet Union. Together with some colleagues, he published a collection of papers on how to use matrices to describe automata functions. Additionally, Tsetlin worked onreasonableandcollective automata behaviour, and onautomata games. Learning automata were also investigated by researchers in the United States in the 1960s. However, the termlearning automatonwas not used until Narendra and Thathachar introduced it in a survey paper in 1974. A learning automaton is an adaptive decision-making unit situated in a random environment that learns the optimal action through repeated interactions with its environment. The actions are chosen according to a specific probability distribution, which is updated based on the environment's response to the action the automaton performs. With respect to the field ofreinforcement learning, learning automata are characterized aspolicy iterators. In contrast to other reinforcement learners, policy iterators directly manipulate the policy π. Another example of policy iterators areevolutionary algorithms. Formally, Narendra and Thathachar define astochastic automatonto consist of: In their paper, they investigate only stochastic automata withr=sandGbeingbijective, allowing them to identify actions with states. The states of such an automaton correspond to the states of a "discrete-state discrete-parameterMarkov process".[1]At each time stept=0,1,2,3,..., the automaton reads an input from its environment, updatesp(t) top(t+1) byA, randomly chooses a successor state according to the probabilitiesp(t+1) and outputs the corresponding action. The automaton's environment, in turn, reads the action and sends the next input to the automaton. Frequently, the input setX= { 0,1 } is used, with 0 and 1 corresponding to anonpenaltyand apenaltyresponse of the environment, respectively; in this case, the automaton should learn to minimize the number ofpenaltyresponses, and the feedback loop of automaton and environment is called a "P-model". More generally, a "Q-model" allows an arbitrary finite input setX, and an "S-model" uses theinterval[0,1] ofreal numbersasX.[2] A visualised demo[3][4]of a single learning automaton has been developed by the μSystems (microSystems) Research Group at Newcastle University. Finite action-set learning automata (FALA) are a class of learning automata for which the number of possible actions is finite or, in more mathematical terms, for which the size of the action-set is finite.[5]
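As an illustration of such an automaton in a P-model environment, the following sketch implements the linear reward-inaction (L_RI) scheme, one standard update rule for finite action-set learning automata. The reward probabilities and learning rate are assumptions of the example.

import random

rng = random.Random(0)

r = 4                                  # number of actions
p = [1.0 / r] * r                      # action probability vector p(t)
reward_prob = [0.2, 0.5, 0.8, 0.4]     # P-model environment (unknown to the automaton)
a = 0.05                               # learning rate

def environment(action):
    # Returns 0 (nonpenalty) with probability reward_prob[action], else 1 (penalty)
    return 0 if rng.random() < reward_prob[action] else 1

for t in range(20_000):
    action = rng.choices(range(r), weights=p)[0]
    if environment(action) == 0:       # L_RI: update only on nonpenalty responses
        p = [p[j] + a * (1 - p[j]) if j == action else (1 - a) * p[j]
             for j in range(r)]

print([round(x, 3) for x in p])        # probability mass concentrates on action 2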
https://en.wikipedia.org/wiki/Learning_automata
Anartificial neural network'slearning ruleorlearning processis a method, mathematical logic oralgorithmwhich improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network. It is done by updating theweight and biaslevels of a network when it is simulated in a specific data environment.[1]A learning rule may accept existing conditions (weights and biases) of the network, and will compare the expected result and actual result of the network to give new and improved values for the weights and biases.[2]Depending on the complexity of the model being simulated, the learning rule of the network can be as simple as anXOR gateormean squared error, or as complex as the result of asystem of differential equations. The learning rule is one of the factors which decides how fast or how accurately the neural network can be developed. Depending on the process to develop the network, there are three main paradigms of machine learning: Many of the learning methods in machine learning work similarly to each other and are based on each other, which makes it difficult to classify them into clear categories. They can be broadly understood as falling into four categories of learning methods, though these categories do not have clear boundaries and methods often belong to more than one of them.[3]Although these learning rules might appear to be based on similar ideas, they do have subtle differences, as each is a generalisation or application of a previous rule, and hence it makes sense to study them separately based on their origins and intents. The Hebbian rule was developed by Donald Hebb in 1949 to describe biological neuron firing. In the mid-1950s it was also applied to computer simulations of neural networks. Δwi=ηxiy{\displaystyle \Delta w_{i}=\eta x_{i}y} whereη{\displaystyle \eta }represents the learning rate,xi{\displaystyle x_{i}}represents the input of neuron i, and y is the output of the neuron. It has been shown that Hebb's rule in its basic form is unstable.Oja's ruleandBCM theoryare other learning rules built on top of or alongside Hebb's rule in the study of biological neurons. The perceptron learning rule originates from the Hebbian assumption, and was used byFrank Rosenblattin his perceptron in 1958. The net is passed to the activation (transfer) function and the function's output is used for adjusting the weights. The learning signal is the difference between the desired response and the actual response of a neuron. The step function is often used as an activation function, and the outputs are generally restricted to -1, 0, or 1. The weights are updated with wnew=wold+η(t−o)xi{\displaystyle w_{\text{new}}=w_{\text{old}}+\eta (t-o)x_{i}}where "t" is the target value and "o" is the output of the perceptron, andη{\displaystyle \eta }is called the learning rate. The algorithm converges to the correct classification if:[5] Note that a single-layer perceptron with this learning rule is incapable of working on linearly non-separable inputs, and hence theXOR problemcannot be solved using this rule alone.[6] Seppo Linnainmaain 1970 is said to have developed the backpropagation algorithm,[7]but the origins of the algorithm go back to the 1960s with many contributors. It is a generalisation of theleast mean squares algorithmin the linear perceptron and the Delta Learning Rule.
It implements gradient descent search through the space of possible network weights, iteratively reducing the error between the target values and the network outputs. The Widrow-Hoff rule is similar to the perceptron learning rule but has a different origin. It was developed for use in theADALINEnetwork, which differs from the perceptron mainly in terms of training. The weights are adjusted according to the weighted sum of the inputs (the net), whereas in the perceptron only the sign of the weighted sum relative to the threshold is used to determine the output, restricted to 0, -1, or +1. This makes ADALINE different from the normal perceptron. The delta rule (DR) is similar to the perceptron learning rule (PLR), with some differences: Sometimes the term delta rule is reserved for the case in which Widrow-Hoff is applied to binary targets specifically, but the terms are often used interchangeably. The delta rule is considered to be a special case of theback-propagation algorithm. The delta rule also closely resembles theRescorla-Wagner model, under which Pavlovian conditioning occurs.[8] Competitive learningis considered a variant ofHebbian learning, but it is special enough to be discussed separately. Competitive learning works by increasing the specialization of each node in the network. It is well suited to findingclusterswithin data. Models and algorithms based on the principle of competitive learning includevector quantizationandself-organizing maps(Kohonen maps).
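The perceptron update above can be demonstrated in a few lines; the sketch below learns the linearly separable AND function with a bias input (the data, learning rate, and epoch count are choices of the example). Replacing the thresholded output o with the raw net w·x in the error term would turn this into the Widrow-Hoff/delta rule update.

import numpy as np

# AND function; the first column is a constant bias input
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
t = np.array([0, 0, 0, 1])
w = np.zeros(3)
eta = 0.1

for epoch in range(20):
    for x, target in zip(X, t):
        o = 1 if w @ x > 0 else 0       # step activation
        w += eta * (target - o) * x     # w_new = w_old + eta * (t - o) * x

print(w, [1 if w @ x > 0 else 0 for x in X])   # reproduces AND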
https://en.wikipedia.org/wiki/Learning_rule
Incryptography,learning with errors(LWE) is a mathematical problem that is widely used to create secureencryption algorithms.[1]It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it.[2]In more technical terms, it refers to thecomputational problemof inferring a linearn{\displaystyle n}-ary functionf{\displaystyle f}over a finiteringfrom given samplesyi=f(xi){\displaystyle y_{i}=f(\mathbf {x} _{i})}some of which may be erroneous. The LWE problem is conjectured to be hard to solve,[1]and thus to be useful in cryptography. More precisely, the LWE problem is defined as follows. LetZq{\displaystyle \mathbb {Z} _{q}}denote the ring of integersmoduloq{\displaystyle q}and letZqn{\displaystyle \mathbb {Z} _{q}^{n}}denote the set ofn{\displaystyle n}-vectorsoverZq{\displaystyle \mathbb {Z} _{q}}. There exists a certain unknown linear functionf:Zqn→Zq{\displaystyle f:\mathbb {Z} _{q}^{n}\rightarrow \mathbb {Z} _{q}}, and the input to the LWE problem is a sample of pairs(x,y){\displaystyle (\mathbf {x} ,y)}, wherex∈Zqn{\displaystyle \mathbf {x} \in \mathbb {Z} _{q}^{n}}andy∈Zq{\displaystyle y\in \mathbb {Z} _{q}}, so that with high probabilityy=f(x){\displaystyle y=f(\mathbf {x} )}. Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the functionf{\displaystyle f}, or some close approximation thereof, with high probability. The LWE problem was introduced byOded Regevin 2005[3](who won the 2018Gödel Prizefor this work); it is a generalization of theparity learningproblem. Regev showed that the LWE problem is as hard to solve as several worst-caselattice problems. Subsequently, the LWE problem has been used as ahardness assumptionto createpublic-key cryptosystems,[3][4]such as thering learning with errors key exchangeby Peikert.[5] Denote byT=R/Z{\displaystyle \mathbb {T} =\mathbb {R} /\mathbb {Z} }theadditive group on reals modulo one. Lets∈Zqn{\displaystyle \mathbf {s} \in \mathbb {Z} _{q}^{n}}be a fixed vector. Letϕ{\displaystyle \phi }be a fixed probability distribution overT{\displaystyle \mathbb {T} }. Denote byAs,ϕ{\displaystyle A_{\mathbf {s} ,\phi }}the distribution onZqn×T{\displaystyle \mathbb {Z} _{q}^{n}\times \mathbb {T} }obtained as follows. Thelearning with errors problemLWEq,ϕ{\displaystyle \mathrm {LWE} _{q,\phi }}is to finds∈Zqn{\displaystyle \mathbf {s} \in \mathbb {Z} _{q}^{n}}, given access to polynomially many samples of choice fromAs,ϕ{\displaystyle A_{\mathbf {s} ,\phi }}. For everyα>0{\displaystyle \alpha >0}, denote byDα{\displaystyle D_{\alpha }}the one-dimensionalGaussianwith zero mean and varianceα2/(2π){\displaystyle \alpha ^{2}/(2\pi )}, that is, the density function isDα(x)=ρα(x)/α{\displaystyle D_{\alpha }(x)=\rho _{\alpha }(x)/\alpha }whereρα(x)=e−π(|x|/α)2{\displaystyle \rho _{\alpha }(x)=e^{-\pi (|x|/\alpha )^{2}}}, and letΨα{\displaystyle \Psi _{\alpha }}be the distribution onT{\displaystyle \mathbb {T} }obtained by consideringDα{\displaystyle D_{\alpha }}modulo one. The version of LWE considered in most of the results would beLWEq,Ψα{\displaystyle \mathrm {LWE} _{q,\Psi _{\alpha }}}. TheLWEproblem described above is thesearchversion of the problem. In thedecisionversion (DLWE), the goal is to distinguish between noisy inner products and uniformly random samples fromZqn×T{\displaystyle \mathbb {Z} _{q}^{n}\times \mathbb {T} }(practically, some discretized version of it).
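The following sketch generates LWE samples in the common discretized form, with b_i = ⟨a_i, s⟩ + e_i mod q taken in Z_q rather than in T; the toy parameters and the rounded-Gaussian stand-in for the discretized error distribution are assumptions of the illustration and are far too small to be secure.

import numpy as np

rng = np.random.default_rng(0)

n, q, m = 8, 97, 20            # dimension, modulus, number of samples (toy sizes)
alpha = 0.01                   # relative noise width
s = rng.integers(0, q, n)      # unknown secret s in Z_q^n

A = rng.integers(0, q, (m, n))                          # uniform a_i in Z_q^n
e = np.rint(rng.normal(0.0, alpha * q, m)).astype(int)  # rounded Gaussian errors
b = (A @ s + e) % q                                     # b_i = <a_i, s> + e_i mod q
# Each row (A[i], b[i]) is one LWE sample. Without the errors e, the secret
# could be recovered by Gaussian elimination over Z_q.
print(A[0], b[0])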
Regev[3]showed that thedecisionandsearchversions are equivalent whenq{\displaystyle q}is a prime bounded by some polynomial inn{\displaystyle n}. Intuitively, if we have a procedure for the search problem, the decision version can be solved easily: just feed the input samples for the decision problem to the solver for the search problem. Denote the given samples by{(ai,bi)}⊂Zqn×T{\displaystyle \{(\mathbf {a} _{i},\mathbf {b} _{i})\}\subset \mathbb {Z} _{q}^{n}\times \mathbb {T} }. If the solver returns a candidates{\displaystyle \mathbf {s} }, for alli{\displaystyle i}, calculate{⟨ai,s⟩−bi}{\displaystyle \{\langle \mathbf {a} _{i},\mathbf {s} \rangle -\mathbf {b} _{i}\}}. If the samples are from an LWE distribution, then the results of this calculation will be distributed according toχ{\displaystyle \chi }, but if the samples are uniformly random, these quantities will be distributed uniformly as well. For the other direction, given a solver for the decision problem, the search version can be solved as follows: recovers{\displaystyle \mathbf {s} }one coordinate at a time. To obtain the first coordinate,s1{\displaystyle \mathbf {s} _{1}}, make a guessk∈Zq{\displaystyle k\in \mathbb {Z} _{q}}, and do the following. Choose a numberr∈Zq{\displaystyle r\in \mathbb {Z} _{q}}uniformly at random. Transform the given samples{(ai,bi)}⊂Zqn×T{\displaystyle \{(\mathbf {a} _{i},\mathbf {b} _{i})\}\subset \mathbb {Z} _{q}^{n}\times \mathbb {T} }as follows. Calculate{(ai+(r,0,…,0),bi+(rk)/q)}{\displaystyle \{(\mathbf {a} _{i}+(r,0,\ldots ,0),\mathbf {b} _{i}+(rk)/q)\}}. Send the transformed samples to the decision solver. If the guessk{\displaystyle k}was correct, the transformation takes the distributionAs,χ{\displaystyle A_{\mathbf {s} ,\chi }}to itself, and otherwise, sinceq{\displaystyle q}is prime, it takes it to the uniform distribution. So, given a polynomial-time solver for the decision problem that errs with very small probability, sinceq{\displaystyle q}is bounded by some polynomial inn{\displaystyle n}, it only takes polynomial time to guess every possible value fork{\displaystyle k}and use the solver to see which one is correct. After obtainings1{\displaystyle \mathbf {s} _{1}}, we follow an analogous procedure for each of the other coordinatessj{\displaystyle \mathbf {s} _{j}}. Namely, we transform ourbi{\displaystyle \mathbf {b} _{i}}samples the same way, and transform ourai{\displaystyle \mathbf {a} _{i}}samples by calculatingai+(0,…,r,…,0){\displaystyle \mathbf {a} _{i}+(0,\ldots ,r,\ldots ,0)}, where ther{\displaystyle r}is in thejth{\displaystyle j^{\text{th}}}coordinate.[3] Peikert[4]showed that this reduction, with a small modification, works for anyq{\displaystyle q}that is a product of distinct, small (polynomial inn{\displaystyle n}) primes. The main idea is that ifq=q1q2⋯qt{\displaystyle q=q_{1}q_{2}\cdots q_{t}}, for eachqℓ{\displaystyle q_{\ell }}, guess and check to see ifsj{\displaystyle \mathbf {s} _{j}}is congruent to0modqℓ{\displaystyle 0\mod q_{\ell }}, and then use theChinese remainder theoremto recoversj{\displaystyle \mathbf {s} _{j}}. Regev[3]showed therandom self-reducibilityof theLWEandDLWEproblems for arbitraryq{\displaystyle q}andχ{\displaystyle \chi }.
Given samples{(ai,bi)}{\displaystyle \{(\mathbf {a} _{i},\mathbf {b} _{i})\}}fromAs,χ{\displaystyle A_{\mathbf {s} ,\chi }}, it is easy to see that{(ai,bi+⟨ai,t⟩/q)}{\displaystyle \{(\mathbf {a} _{i},\mathbf {b} _{i}+\langle \mathbf {a} _{i},\mathbf {t} \rangle /q)\}}are samples fromAs+t,χ{\displaystyle A_{\mathbf {s} +\mathbf {t} ,\chi }}. So, suppose there was some setS⊂Zqn{\displaystyle {\mathcal {S}}\subset \mathbb {Z} _{q}^{n}}such that|S|/|Zqn|=1/poly⁡(n){\displaystyle |{\mathcal {S}}|/|\mathbb {Z} _{q}^{n}|=1/\operatorname {poly} (n)}, and for distributionsAs′,χ{\displaystyle A_{\mathbf {s} ',\chi }}, withs′←S{\displaystyle \mathbf {s} '\leftarrow {\mathcal {S}}},DLWEwas easy. Then there would be some distinguisherA{\displaystyle {\mathcal {A}}}, who, given samples{(ai,bi)}{\displaystyle \{(\mathbf {a} _{i},\mathbf {b} _{i})\}}, could tell whether they were uniformly random or fromAs′,χ{\displaystyle A_{\mathbf {s} ',\chi }}. If we need to distinguish uniformly random samples fromAs,χ{\displaystyle A_{\mathbf {s} ,\chi }}, wheres{\displaystyle \mathbf {s} }is chosen uniformly at random fromZqn{\displaystyle \mathbb {Z} _{q}^{n}}, we could simply try different valuest{\displaystyle \mathbf {t} }sampled uniformly at random fromZqn{\displaystyle \mathbb {Z} _{q}^{n}}, calculate{(ai,bi+⟨ai,t⟩/q)}{\displaystyle \{(\mathbf {a} _{i},\mathbf {b} _{i}+\langle \mathbf {a} _{i},\mathbf {t} \rangle /q)\}}and feed these samples toA{\displaystyle {\mathcal {A}}}. SinceS{\displaystyle {\mathcal {S}}}comprises a large fraction ofZqn{\displaystyle \mathbb {Z} _{q}^{n}}, with high probability, if we choose a polynomial number of values fort{\displaystyle \mathbf {t} }, we will find one such thats+t∈S{\displaystyle \mathbf {s} +\mathbf {t} \in {\mathcal {S}}}, andA{\displaystyle {\mathcal {A}}}will successfully distinguish the samples. Thus, no suchS{\displaystyle {\mathcal {S}}}can exist, meaningLWEandDLWEare (up to a polynomial factor) as hard in the average case as they are in the worst case. For ann{\displaystyle n}-dimensional latticeL{\displaystyle L}, let thesmoothing parameterηε(L){\displaystyle \eta _{\varepsilon }(L)}denote the smallests{\displaystyle s}such thatρ1/s(L∗∖{0})≤ε{\displaystyle \rho _{1/s}(L^{*}\setminus \{\mathbf {0} \})\leq \varepsilon }whereL∗{\displaystyle L^{*}}is the dual ofL{\displaystyle L}andρα(x)=e−π(|x|/α)2{\displaystyle \rho _{\alpha }(x)=e^{-\pi (|x|/\alpha )^{2}}}is extended to sets by summing over function values at each element in the set. LetDL,r{\displaystyle D_{L,r}}denote the discrete Gaussian distribution onL{\displaystyle L}of widthr{\displaystyle r}for a latticeL{\displaystyle L}and realr>0{\displaystyle r>0}. The probability of eachx∈L{\displaystyle x\in L}is proportional toρr(x){\displaystyle \rho _{r}(x)}. Thediscrete Gaussian sampling problem(DGS) is defined as follows: an instance ofDGSϕ{\displaystyle DGS_{\phi }}is given by ann{\displaystyle n}-dimensional latticeL{\displaystyle L}and a numberr≥ϕ(L){\displaystyle r\geq \phi (L)}, and the goal is to output a sample fromDL,r{\displaystyle D_{L,r}}. Regev shows that there is a reduction fromGapSVP100nγ(n){\displaystyle \operatorname {GapSVP} _{100{\sqrt {n}}\gamma (n)}}toDGSnγ(n)/λ(L∗){\displaystyle DGS_{{\sqrt {n}}\gamma (n)/\lambda (L^{*})}}for any functionγ(n)≥1{\displaystyle \gamma (n)\geq 1}.
Regev then shows that there exists an efficient quantum algorithm forDGS2nηε(L)/α{\displaystyle DGS_{{\sqrt {2n}}\eta _{\varepsilon }(L)/\alpha }}given access to an oracle forLWEq,Ψα{\displaystyle \mathrm {LWE} _{q,\Psi _{\alpha }}}for integerq{\displaystyle q}andα∈(0,1){\displaystyle \alpha \in (0,1)}such thatαq>2n{\displaystyle \alpha q>2{\sqrt {n}}}. This implies the hardness of LWE. Although the proof of this assertion works for anyq{\displaystyle q}, for creating a cryptosystem the modulusq{\displaystyle q}has to be polynomial inn{\displaystyle n}. Peikert proves[4]that there is a probabilistic polynomial time reduction from theGapSVPζ,γ{\displaystyle \operatorname {GapSVP} _{\zeta ,\gamma }}problem in the worst case to solvingLWEq,Ψα{\displaystyle \mathrm {LWE} _{q,\Psi _{\alpha }}}usingpoly⁡(n){\displaystyle \operatorname {poly} (n)}samples for parametersα∈(0,1){\displaystyle \alpha \in (0,1)},γ(n)≥n/(αlog⁡n){\displaystyle \gamma (n)\geq n/(\alpha {\sqrt {\log n}})},ζ(n)≥γ(n){\displaystyle \zeta (n)\geq \gamma (n)}andq≥(ζ/n)⋅ω(log⁡n){\displaystyle q\geq (\zeta /{\sqrt {n}})\cdot \omega ({\sqrt {\log n}})}. TheLWEproblem serves as a versatile problem used in the construction of several[3][4][6][7]cryptosystems. In 2005, Regev[3]showed that the decision version of LWE is hard assuming quantum hardness of thelattice problemsGapSVPγ{\displaystyle \mathrm {GapSVP} _{\gamma }}(forγ{\displaystyle \gamma }as above) andSIVPt{\displaystyle \mathrm {SIVP} _{t}}witht=O(n/α){\displaystyle t=O(n/\alpha )}. In 2009, Peikert[4]proved a similar result assuming only the classical hardness of the related problemGapSVPζ,γ{\displaystyle \mathrm {GapSVP} _{\zeta ,\gamma }}. The disadvantage of Peikert's result is that it relies on a non-standard version of GapSVP, a problem that is easier than SIVP. Regev[3]proposed apublic-key cryptosystembased on the hardness of theLWEproblem. The cryptosystem as well as the proof of security and correctness are completely classical. The system is characterized bym,q{\displaystyle m,q}and a probability distributionχ{\displaystyle \chi }onT{\displaystyle \mathbb {T} }. The setting of the parameters used in proofs of correctness and security is The cryptosystem is then defined by: The proof of correctness follows from the choice of parameters and some probability analysis. The proof of security is by reduction to the decision version ofLWE: an algorithm for distinguishing between encryptions (with above parameters) of0{\displaystyle 0}and1{\displaystyle 1}can be used to distinguish betweenAs,χ{\displaystyle A_{s,\chi }}and the uniform distribution overZqn×T{\displaystyle \mathbb {Z} _{q}^{n}\times \mathbb {T} }. Peikert[4]proposed a system that is secure even against anychosen-ciphertext attack. The idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The idea comes from the associativity of matrix multiplications, and the errors are used to provide the security. The paper[8]appeared in 2012 after a provisional patent application was filed in 2012. The security of the protocol is proven based on the hardness of solving the LWE problem. In 2014, Peikert presented a key-transport scheme[9]following the same basic idea as Ding's, in which Ding's idea of sending an additional 1-bit signal for rounding is also used. The "new hope" implementation,[10]selected for Google's post-quantum experiment,[11]uses Peikert's scheme with variation in the error distribution.
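A toy version of the Regev-style public-key scheme described above can be written in a few lines: the public key is a batch of LWE samples, encrypting a bit adds a random subset-sum of those samples plus bit·⌊q/2⌋, and decryption rounds the result. The concrete parameters here are illustrative assumptions only and offer no security.

import numpy as np

rng = np.random.default_rng(1)

n, q, m = 8, 97, 40
s = rng.integers(0, q, n)                          # secret key
A = rng.integers(0, q, (m, n))
e = np.rint(rng.normal(0.0, 1.0, m)).astype(int)
b = (A @ s + e) % q                                # public key: m LWE samples (A, b)

def encrypt(bit):
    r = rng.integers(0, 2, m)                      # random subset of the samples
    u = (r @ A) % q                                # sum of the chosen a_i
    v = (r @ b + bit * (q // 2)) % q               # sum of the chosen b_i + bit*q/2
    return u, v

def decrypt(u, v):
    d = (v - u @ s) % q                            # = bit*q/2 + accumulated noise
    return int(q // 4 < d < 3 * q // 4)            # round: near q/2 -> 1, near 0 -> 0

u, v = encrypt(1)
print(decrypt(u, v))                               # 1 (w.h.p. at these toy sizes)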
An RLWE version of the classicFeige–Fiat–Shamir identification protocolwas created and converted to adigital signaturein 2011 by Lyubashevsky. The details of this signature were extended in 2012 by Güneysu, Lyubashevsky, and Pöppelmann and published in their paper "Practical Lattice-Based Cryptography – A Signature Scheme for Embedded Systems". These papers laid the groundwork for a variety of recent signature algorithms, some based directly on the ring learning with errors problem and some which are not tied to the same hard RLWE problems.
https://en.wikipedia.org/wiki/Learning_with_errors
Inmachine learningandcomputer vision,M-theoryis a learning framework inspired by feed-forward processing in theventral streamofvisual cortexand originally developed for recognition and classification of objects in visual scenes. M-theory was later applied to other areas, such asspeech recognition. On certain image recognition tasks, algorithms based on a specific instantiation of M-theory, HMAX, achieved human-level performance.[1] The core principle of M-theory is extracting representations invariant under various transformations of images (translation, scale, 2D and 3D rotation and others). In contrast with other approaches using invariant representations, in M-theory they are not hardcoded into the algorithms, but learned. M-theory also shares some principles withcompressed sensing. The theory proposes a multilayered hierarchical learning architecture, similar to that of the visual cortex. A great challenge in visual recognition tasks is that the same object can be seen in a variety of conditions: from different distances, from different viewpoints, under different lighting, partially occluded, and so on. In addition, for particular classes of objects, such as faces, highly complex specific transformations may be relevant, such as changing facial expressions. For learning to recognize images, it is greatly beneficial to factor out these variations. It results in a much simpler classification problem and, consequently, in a great reduction of thesample complexityof the model. A simple computational experiment illustrates this idea. Two instances of a classifier were trained to distinguish images of planes from those of cars. For training and testing of the first instance, images with arbitrary viewpoints were used. The other instance received only images seen from a particular viewpoint, which was equivalent to training and testing the system on an invariant representation of the images. The second classifier performed quite well even after receiving a single example from each category, while the performance of the first classifier was close to a random guess even after seeing 20 examples. Invariant representations have been incorporated into several learning architectures, such asneocognitrons. Most of these architectures, however, provided invariance through custom-designed features or properties of the architecture itself. While this helps to take into account some sorts of transformations, such as translations, it is very nontrivial to accommodate other sorts of transformations, such as 3D rotations and changing facial expressions. M-theory provides a framework for how such transformations can be learned. In addition to higher flexibility, this theory also suggests how the human brain may have similar capabilities. Another core idea of M-theory is close in spirit to ideas from the field ofcompressed sensing. An implication of theJohnson–Lindenstrauss lemmasays that a particular number of images can be embedded into a low-dimensionalfeature spacewith the same distances between images by using random projections. This result suggests that thedot productbetween the observed image and some other image stored in memory, called a template, can be used as a feature helping to distinguish the image from other images. The template need not be related to the image in any way; it could be chosen randomly. The two ideas outlined in the previous sections can be brought together to construct a framework for learning invariant representations.
The key observation is how the dot product between an image I{\displaystyle I} and a template t{\displaystyle t} behaves when the image is transformed (by translations, rotations, scalings, etc.). If the transformation g{\displaystyle g} is a member of a unitary group of transformations, then the following holds:

⟨gI,t⟩=⟨I,g−1t⟩(1){\displaystyle \langle gI,t\rangle =\langle I,g^{-1}t\rangle \qquad (1)}

In other words, the dot product of the transformed image and a template is equal to the dot product of the original image and the inversely transformed template. For instance, for an image rotated by 90 degrees, the inversely transformed template would be rotated by −90 degrees.

Consider the set of dot products of an image I{\displaystyle I} with all possible transformations of a template: {⟨I,g′t⟩∣g′∈G}{\displaystyle \lbrace \langle I,g^{\prime }t\rangle \mid g^{\prime }\in G\rbrace }. If one applies a transformation g{\displaystyle g} to I{\displaystyle I}, the set becomes {⟨gI,g′t⟩∣g′∈G}{\displaystyle \lbrace \langle gI,g^{\prime }t\rangle \mid g^{\prime }\in G\rbrace }. But because of property (1), this is equal to {⟨I,g−1g′t⟩∣g′∈G}{\displaystyle \lbrace \langle I,g^{-1}g^{\prime }t\rangle \mid g^{\prime }\in G\rbrace }. The set {g−1g′∣g′∈G}{\displaystyle \lbrace g^{-1}g^{\prime }\mid g^{\prime }\in G\rbrace } is equal to the set of all elements in G{\displaystyle G}. To see this, note that every g−1g′{\displaystyle g^{-1}g^{\prime }} is in G{\displaystyle G} due to the closure property of groups, and for every g′′{\displaystyle g^{\prime \prime }} in G{\displaystyle G} there exists a preimage g′{\displaystyle g^{\prime }} such that g′′=g−1g′{\displaystyle g^{\prime \prime }=g^{-1}g^{\prime }} (namely, g′=gg′′{\displaystyle g^{\prime }=gg^{\prime \prime }}). Thus, {⟨I,g−1g′t⟩∣g′∈G}={⟨I,g′′t⟩∣g′′∈G}{\displaystyle \lbrace \langle I,g^{-1}g^{\prime }t\rangle \mid g^{\prime }\in G\rbrace =\lbrace \langle I,g^{\prime \prime }t\rangle \mid g^{\prime \prime }\in G\rbrace }. The set of dot products therefore remains the same even though a transformation was applied to the image. This set by itself may serve as a (very cumbersome) invariant representation of an image; more practical representations can be derived from it.

In the introductory section, it was claimed that M-theory allows invariant representations to be learned. This is because templates and their transformed versions can be learned from visual experience, by exposing the system to sequences of transformations of objects. It is plausible that similar visual experiences occur in the early period of human life, for instance when infants twiddle toys in their hands. Because templates may be totally unrelated to the images that the system will later try to classify, memories of these visual experiences may serve as a basis for recognizing many different kinds of objects in later life. However, as shown later, for some kinds of transformations specific templates are needed.

To implement the ideas described in the previous sections, one needs to know how to derive a computationally efficient invariant representation of an image. Such a unique representation for each image can be characterized by a set of one-dimensional probability distributions (the empirical distributions of the dot products between the image and a set of templates stored during unsupervised learning). These probability distributions can in turn be described by histograms or by a set of statistical moments, as shown below.
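The following minimal numerical sketch illustrates property (1) and the argument above. The group here is the cyclic group of 90-degree rotations of square patches, an illustrative choice not taken from the source; the check confirms that the set of dot products is unchanged when the image is transformed.

```python
# A minimal sketch of property (1): the set of dot products {<I, g't>}
# over a group of transformations is unchanged when the image itself is
# transformed by a group member. The group here is the cyclic group of
# 90-degree rotations of square patches (illustrative choice).
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))      # a random "image" patch I
template = rng.normal(size=(8, 8))   # a random template t

def rotations(patch):
    """All four 90-degree rotations of a patch (a compact group G)."""
    return [np.rot90(patch, k) for k in range(4)]

def dot_product_set(img, tmpl):
    """Sorted dot products of img with every transformed template."""
    return sorted(round(float(np.sum(img * g_t)), 8) for g_t in rotations(tmpl))

transformed_image = np.rot90(image, 1)  # apply a transformation g to I

# The two sets coincide: the set is an invariant signature of the image.
assert dot_product_set(image, template) == dot_product_set(transformed_image, template)
print(dot_product_set(image, template))
```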
An orbit OI{\displaystyle O_{I}} is the set of images gI{\displaystyle gI} generated from a single image I{\displaystyle I} under the action of the group G,∀g∈G{\displaystyle G,\forall g\in G}. In other words, images of an object and of its transformations correspond to an orbit OI{\displaystyle O_{I}}. If two orbits have a point in common, they are identical everywhere,[2] i.e. an orbit is an invariant and unique representation of an image. So, two images are called equivalent when they belong to the same orbit: I∼I′{\displaystyle I\sim I^{\prime }} if ∃g∈G{\displaystyle \exists g\in G} such that I′=gI{\displaystyle I^{\prime }=gI}. Conversely, two orbits are different if none of the images in one orbit coincides with any image in the other.[3]

A natural question arises: how can one compare two orbits? There are several possible approaches. One of them employs the fact that, intuitively, two empirical orbits are the same irrespective of the ordering of their points. Thus, one can consider a probability distribution PI{\displaystyle P_{I}} induced by the group's action on images I{\displaystyle I} (gI{\displaystyle gI} can be seen as a realization of a random variable). This probability distribution PI{\displaystyle P_{I}} can be almost uniquely characterized by K{\displaystyle K} one-dimensional probability distributions P⟨I,tk⟩{\displaystyle P_{\langle I,t^{k}\rangle }} induced by the (one-dimensional) results of the projections ⟨I,tk⟩{\displaystyle \langle I,t^{k}\rangle }, where tk,k=1,…,K{\displaystyle t^{k},k=1,\ldots ,K} are a set of templates (randomly chosen images), based on the Cramer–Wold theorem[4] and concentration of measures.

Consider n{\displaystyle n} images Xn∈X{\displaystyle X_{n}\in X}. Let K≥2cε2log⁡nδ{\displaystyle K\geq {\frac {2}{c\varepsilon ^{2}}}\log {\frac {n}{\delta }}}, where c{\displaystyle c} is a universal constant. Then, with probability 1−δ2{\displaystyle 1-\delta ^{2}}, the distances between the probability distributions induced by any two images I,I′{\displaystyle I,I^{\prime }} ∈{\displaystyle \in } Xn{\displaystyle X_{n}} are approximated by their K{\displaystyle K} one-dimensional projections to within ε{\displaystyle \varepsilon }. This result (informally) says that an approximately invariant and unique representation of an image I{\displaystyle I} can be obtained from the estimates of K{\displaystyle K} 1-D probability distributions P⟨I,tk⟩{\displaystyle P_{\langle I,t^{k}\rangle }} for k=1,…,K{\displaystyle k=1,\ldots ,K}. The number K{\displaystyle K} of projections needed to discriminate n{\displaystyle n} orbits, induced by n{\displaystyle n} images, up to precision ε{\displaystyle \varepsilon } (and with confidence 1−δ2{\displaystyle 1-\delta ^{2}}) is K≥2cε2log⁡nδ{\displaystyle K\geq {\frac {2}{c\varepsilon ^{2}}}\log {\frac {n}{\delta }}}, where c{\displaystyle c} is a universal constant.

To classify an image, its set of estimated one-dimensional distributions is compared with the sets stored for labeled images. Estimates of such one-dimensional probability density functions (PDFs) P⟨I,tk⟩{\displaystyle P_{\langle I,t^{k}\rangle }} can be written in terms of histograms as μnk(I)=1/|G|∑i=1|G|ηn(⟨I,gitk⟩){\displaystyle \mu _{n}^{k}(I)=1/\left|G\right|\sum _{i=1}^{\left|G\right|}\eta _{n}(\langle I,g_{i}t^{k}\rangle )}, where ηn,n=1,…,N{\displaystyle \eta _{n},n=1,\ldots ,N} is a set of nonlinear functions. These 1-D probability distributions can be characterized with N-bin histograms or with a set of statistical moments. For example, HMAX represents an architecture in which pooling is done with a max operation.

In this "recipe" for image classification, groups of transformations are approximated with a finite number of transformations. Such an approximation is possible only when the group is compact.
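As a concrete illustration of this recipe, the sketch below pools the dot products with transformed templates into per-template histograms and checks that the resulting signature is unchanged under a transformation of the image. The group (90-degree rotations), bin range, and sizes are illustrative assumptions, not values from the source.

```python
# A sketch of the pooled signature described above: for each template t^k,
# the dot products <I, g_i t^k> over a finite transformation group are
# pooled into an N-bin histogram (an empirical distribution).
import numpy as np

rng = np.random.default_rng(1)

def signature(image, templates, n_bins=10, lo=-30.0, hi=30.0):
    """Concatenated histograms of <I, g_i t^k> over all rotations g_i."""
    sig = []
    for t in templates:
        dots = [float(np.sum(image * np.rot90(t, k))) for k in range(4)]
        hist, _ = np.histogram(dots, bins=n_bins, range=(lo, hi))
        sig.append(hist / len(dots))   # empirical distribution per template
    return np.concatenate(sig)

image = rng.normal(size=(8, 8))
templates = [rng.normal(size=(8, 8)) for _ in range(5)]

s1 = signature(image, templates)
s2 = signature(np.rot90(image, 2), templates)   # transformed image
print(np.allclose(s1, s2))  # True: the signature is invariant
```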
Groups such as all translations and all scalings of an image are not compact, since they allow arbitrarily large transformations; however, they are locally compact, and for locally compact groups invariance is achievable within a certain range of transformations.[2]

Assume that G0{\displaystyle G_{0}} is the subset of transformations from G{\displaystyle G} for which the transformed patterns exist in memory. For an image I{\displaystyle I} and a template tk{\displaystyle t_{k}}, assume that ⟨I,g−1tk⟩{\displaystyle \langle I,g^{-1}t_{k}\rangle } is equal to zero everywhere except on some subset of G0{\displaystyle G_{0}}. This subset is called the support of ⟨I,g−1tk⟩{\displaystyle \langle I,g^{-1}t_{k}\rangle } and is denoted supp⁡(⟨I,g−1tk⟩){\displaystyle \operatorname {supp} (\langle I,g^{-1}t_{k}\rangle )}. It can be proven that if, for a transformation g′{\displaystyle g^{\prime }}, the support set also lies within g′G0{\displaystyle g^{\prime }G_{0}}, then the signature of I{\displaystyle I} is invariant with respect to g′{\displaystyle g^{\prime }}.[2] This theorem determines the range of transformations for which invariance is guaranteed to hold: the smaller supp⁡(⟨I,g−1tk⟩){\displaystyle \operatorname {supp} (\langle I,g^{-1}t_{k}\rangle )} is, the larger the range of transformations for which invariance is guaranteed.

This means that for a group that is only locally compact, not all templates work equally well. Preferable templates are those with a reasonably small supp⁡(⟨gI,tk⟩){\displaystyle \operatorname {supp} (\langle gI,t_{k}\rangle )} for a generic image. This property is called localization: templates are sensitive only to images within a small range of transformations. Although minimizing supp⁡(⟨gI,tk⟩){\displaystyle \operatorname {supp} (\langle gI,t_{k}\rangle )} is not absolutely necessary for the system to work, it improves the approximation of invariance. Requiring localization simultaneously for translation and scale yields a very specific kind of template: Gabor functions.[2]

The desirability of custom templates for non-compact groups is in conflict with the principle of learning invariant representations. However, for certain kinds of regularly encountered image transformations, templates might be the result of evolutionary adaptations. Neurobiological data suggest that there is Gabor-like tuning in the first layer of visual cortex.[5] The optimality of Gabor templates for translations and scales is a possible explanation of this phenomenon.

Many interesting transformations of images do not form groups. For instance, transformations of images associated with 3D rotation of the corresponding 3D object do not form a group, because it is impossible to define an inverse transformation (two objects may look the same from one angle but different from another). However, approximate invariance is still achievable even for non-group transformations if the localization condition for templates holds and the transformation can be locally linearized.

As noted above, for the specific case of translations and scalings, the localization condition can be satisfied with generic Gabor templates. For general (non-group) transformations, however, the localization condition can be satisfied only for a specific class of objects.[2] More specifically, in order to satisfy the condition, templates must be similar to the objects one would like to recognize. For instance, to build a system to recognize 3D-rotated faces, one needs to use other 3D-rotated faces as templates.
This may explain the existence of specialized modules in the brain, such as the one responsible for face recognition.[2] Even with custom templates, a noise-like encoding of images and templates is necessary for localization. It can be achieved naturally if the non-group transformation is processed on any layer other than the first in a hierarchical recognition architecture.

The previous section suggests one motivation for hierarchical image recognition architectures, but they have other benefits as well. Firstly, hierarchical architectures best accomplish the goal of 'parsing' a complex visual scene with many objects consisting of many parts, whose relative positions may vary greatly. In this case, different elements of the system must react to different objects and parts. In hierarchical architectures, representations of parts at different levels of the embedding hierarchy can be stored at different layers of the hierarchy.

Secondly, hierarchical architectures that have invariant representations for parts of objects may facilitate the learning of complex compositional concepts. This facilitation may happen through the reuse of representations of parts that were learned earlier, in the process of learning other concepts. As a result, the sample complexity of learning compositional concepts may be greatly reduced.

Finally, hierarchical architectures have better tolerance to clutter. The clutter problem arises when the target object is in front of a non-uniform background, which acts as a distractor for the visual task. A hierarchical architecture provides signatures for parts of target objects that do not include parts of the background and are not affected by background variations.[6]

In hierarchical architectures, one layer is not necessarily invariant to all transformations that are handled by the hierarchy as a whole. Some transformations may pass through that layer to upper layers, as in the case of the non-group transformations described in the previous section. For other transformations, an element of the layer may produce invariant representations only within a small range of transformations. For instance, elements of the lower layers in the hierarchy have a small visual field and thus can handle only a small range of translation. For such transformations, the layer should provide covariant, rather than invariant, signatures. The property of covariance can be written as distr⁡(⟨μl(gI),μl(t)⟩)=distr⁡(⟨μl(I),μl(g−1t)⟩){\displaystyle \operatorname {distr} (\langle \mu _{l}(gI),\mu _{l}(t)\rangle )=\operatorname {distr} (\langle \mu _{l}(I),\mu _{l}(g^{-1}t)\rangle )}, where l{\displaystyle l} is a layer, μl(I){\displaystyle \mu _{l}(I)} is the signature of the image at that layer, and distr{\displaystyle \operatorname {distr} } stands for "the distribution of values of the expression over all g∈G{\displaystyle g\in G}".

M-theory is based on a quantitative theory of the ventral stream of visual cortex.[7][8] Understanding how the visual cortex works in object recognition is still a challenging task for neuroscience. Humans and primates are able to memorize and recognize objects after seeing just a couple of examples, unlike state-of-the-art machine vision systems, which usually require large amounts of data in order to recognize objects.
Until recently, the use of visual neuroscience in computer vision was limited to early vision: deriving stereo algorithms (e.g.,[9]) and justifying the use of DoG (derivative-of-Gaussian) filters and, more recently, Gabor filters.[10][11] No real attention has been given to biologically plausible features of higher complexity. While mainstream computer vision has always been inspired and challenged by human vision, it seems never to have advanced past the very first stages of processing in the simple cells of V1 and V2. Although some systems inspired, to various degrees, by neuroscience have been tested on at least some natural images, neurobiological models of object recognition in cortex have not yet been extended to deal with real-world image databases.[12]

The M-theory learning framework employs a novel hypothesis about the main computational function of the ventral stream: the representation of new objects/images in terms of a signature that is invariant to transformations learned during visual experience. This allows recognition from very few labeled examples – in the limit, just one.

Neuroscience suggests that a natural functional for a neuron to compute is a high-dimensional dot product between an "image patch" and another image patch (called a template), which is stored in terms of synaptic weights (synapses per neuron). The standard computational model of a neuron is based on a dot product and a threshold. Another important feature of the visual cortex is that it consists of simple and complex cells. This idea, originally proposed by Hubel and Wiesel,[9] is employed by M-theory. Simple cells compute dot products of an image and transformations of templates ⟨I,gitk⟩{\displaystyle \langle I,g_{i}t^{k}\rangle } for i=1,…,|G|{\displaystyle i=1,\ldots ,|G|} (|G|{\displaystyle |G|} is the number of simple cells). Complex cells are responsible for pooling: computing empirical histograms or statistical moments of these dot products. Such a histogram can be computed by neurons using a smooth version σ{\displaystyle \sigma } of the step function to count the dot products falling into each bin, where Δ{\displaystyle \Delta } is the width of a histogram bin and n{\displaystyle n} is the number of the bin.

In [13][14] the authors applied M-theory to unconstrained face recognition in natural photographs. Unlike the DAR (detection, alignment, and recognition) method, which handles clutter by detecting objects and cropping closely around them so that very little background remains, this approach accomplishes detection and alignment implicitly by storing transformations of training images (templates) rather than explicitly detecting and aligning or cropping faces at test time. This system is built according to the principles of a recent theory of invariance in hierarchical networks and can evade the clutter problem that is generally problematic for feedforward systems. The resulting end-to-end system achieves a drastic improvement in the state of the art on this task, reaching the same level of performance as the best systems operating on aligned, closely cropped images (with no outside training data). It also performs well on two newer datasets similar to LFW but more difficult: a significantly jittered (misaligned) version of LFW, and SUFR-W (for example, the model's accuracy in the LFW "unaligned & no outside data used" category is 87.55±1.41%, compared to 81.70±1.78% for the state-of-the-art APEM (adaptive probabilistic elastic matching)).
The theory was also applied to a range of recognition tasks: from invariant single-object recognition in clutter to multiclass categorization problems on publicly available data sets (CalTech5, CalTech101, MIT-CBCL) and complex (street) scene understanding tasks that require the recognition of both shape-based and texture-based objects (on the StreetScenes data set).[12] The approach performs well: it is capable of learning from only a few training examples and was shown to outperform several more complex state-of-the-art systems, such as constellation models and a hierarchical SVM-based face-detection system. A key element in the approach is a new set of scale- and position-tolerant feature detectors, which are biologically plausible and agree quantitatively with the tuning properties of cells along the ventral stream of visual cortex. These features are adaptive to the training set, although a universal feature set, learned from a set of natural images unrelated to any categorization task, was likewise shown to achieve good performance.

The theory can also be extended to the speech recognition domain. As an example, in [15] an extension of the theory for unsupervised learning of invariant visual representations to the auditory domain was proposed and empirically evaluated for voiced speech sound classification. The authors empirically demonstrated that a single-layer, phone-level representation, extracted from base speech features, improves segment classification accuracy and decreases the number of training examples needed, in comparison with standard spectral and cepstral features, for an acoustic classification task on the TIMIT dataset.[16]
https://en.wikipedia.org/wiki/M-Theory_(learning_framework)
Machine learning control (MLC) is a subfield of machine learning, intelligent control, and control theory which aims to solve optimal control problems with machine learning methods. Key applications are complex nonlinear systems for which linear control theory methods are not applicable. Four types of problems are commonly encountered.

Adaptive Dynamic Programming (ADP), also known as approximate dynamic programming or neuro-dynamic programming, is a machine learning control method that combines reinforcement learning with dynamic programming to solve optimal control problems for complex systems. ADP addresses the "curse of dimensionality" in traditional dynamic programming by approximating value functions or control policies using parametric structures such as neural networks. The core idea revolves around learning a control policy that minimizes a long-term cost function J{\displaystyle J}, defined as J(x(t))=∫t∞e−γ(τ−t)r(x(τ),u(τ))dτ{\displaystyle J(x(t))=\int _{t}^{\infty }e^{-\gamma (\tau -t)}r(x(\tau ),u(\tau ))\,d\tau }, where x{\displaystyle x} is the system state, u{\displaystyle u} is the control input, r{\displaystyle r} is the instantaneous reward, and γ{\displaystyle \gamma } is a discount factor. ADP employs two interacting components: a critic that estimates the value function V(x)≈J(x){\displaystyle V(x)\approx J(x)}, and an actor that updates the control policy u(x){\displaystyle u(x)}. The critic and actor are trained iteratively, using temporal difference learning or gradient descent, to satisfy the Hamilton–Jacobi–Bellman (HJB) equation:

minu(r(x,u)+∂V∂xf(x,u))=0,{\displaystyle \min _{u}\left(r(x,u)+{\frac {\partial V}{\partial x}}f(x,u)\right)=0,}

where f(x,u){\displaystyle f(x,u)} describes the system dynamics. Key variants include heuristic dynamic programming (HDP), dual heuristic programming (DHP), and globalized dual heuristic programming (GDHP).[7] ADP has been applied to robotics, power systems, and autonomous vehicles, offering a data-driven framework for near-optimal control without requiring full system models. Challenges remain in ensuring stability guarantees and convergence for general nonlinear systems.

MLC has been successfully applied to many nonlinear control problems, exploring unknown and often unexpected actuation mechanisms. Many more engineering MLC applications are summarized in the review article of PJ Fleming & RC Purshouse (2002).[12] As is the case for all general nonlinear methods, MLC does not guarantee convergence, optimality, or robustness for a range of operating conditions.
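Returning to the actor-critic scheme described above, the following toy sketch applies it to a scalar linear system with quadratic stage cost. The discrete-time setting, the quadratic critic V(x) = w·x², and the linear actor u = −k·x are illustrative simplifications of the ADP idea, not a production implementation.

```python
# A toy sketch of the ADP actor-critic idea for the scalar linear system
# x' = a*x + b*u with quadratic stage cost r = x^2 + u^2 and discount
# factor gamma. All constants and learning rates are illustrative.
import numpy as np

a, b, gamma = 0.9, 0.5, 0.95
w, k = 0.0, 0.0                    # critic weight, actor gain
alpha_c, alpha_a = 0.05, 0.01      # learning rates
rng = np.random.default_rng(0)

for episode in range(2000):
    x = rng.uniform(-1.0, 1.0)
    for _ in range(20):
        u = -k * x + 0.1 * rng.normal()          # exploratory action
        r = x**2 + u**2                          # stage cost
        x_next = a * x + b * u
        # Critic: temporal-difference update toward r + gamma*V(x')
        td_error = r + gamma * w * x_next**2 - w * x**2
        w += alpha_c * td_error * x**2
        # Actor: nudge the gain to reduce the predicted long-term cost
        dQ_du = 2 * u + gamma * 2 * w * x_next * b   # d(r + gamma*V)/du
        k += alpha_a * dQ_du * x                     # since du/dk = -x
        x = x_next

print(f"learned critic weight w={w:.3f}, actor gain k={k:.3f}")
```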
https://en.wikipedia.org/wiki/Machine_learning_control
Machine learning in bioinformatics is the application of machine learning algorithms to bioinformatics,[1] including genomics, proteomics, microarrays, systems biology, evolution, and text mining.[2][3]

Prior to the emergence of machine learning, bioinformatics algorithms had to be programmed by hand; for problems such as protein structure prediction, this proved difficult.[4] Machine learning techniques such as deep learning can learn features of data sets rather than requiring the programmer to define them individually. The algorithm can further learn how to combine low-level features into more abstract features, and so on. This multi-layered approach allows such systems to make sophisticated predictions when appropriately trained. These methods contrast with other computational biology approaches which, while exploiting existing datasets, do not allow the data to be interpreted and analyzed in unanticipated ways.

Machine learning algorithms in bioinformatics can be used for prediction, classification, and feature selection. The methods used to achieve these tasks are varied and span many disciplines; the best known among them are machine learning and statistics. Classification and prediction tasks aim at building models that describe and distinguish classes or concepts for future prediction. Due to the exponential growth of information technologies and applicable models, including artificial intelligence and data mining, and to access to ever more comprehensive data sets, new and better information analysis techniques have been created, based on their ability to learn. Such models allow researchers to reach beyond description and provide insights in the form of testable models. Artificial neural networks have been used in bioinformatics for many of these tasks.[5]

The way that features, often vectors in a many-dimensional space, are extracted from the domain data is an important component of learning systems.[6] In genomics, a typical representation of a sequence is a vector of k-mer frequencies: a vector of dimension 4k{\displaystyle 4^{k}} whose entries count the appearance of each subsequence of length k{\displaystyle k} in a given sequence. Since for a value as small as k=12{\displaystyle k=12} the dimensionality of these vectors is huge (e.g. in this case the dimension is 412≈16×106{\displaystyle 4^{12}\approx 16\times 10^{6}}), techniques such as principal component analysis are used to project the data to a lower-dimensional space, thus selecting a smaller set of features from the sequences.[6][7]

In this type of machine learning task, the output is a discrete variable. One example of this type of task in bioinformatics is labeling new genomic data (such as genomes of unculturable bacteria) based on a model of already labeled data.[6]

Hidden Markov models (HMMs) are a class of statistical models for sequential data (often related to systems evolving over time). An HMM is composed of two mathematical objects: an observed state-dependent process X1,X2,…,XM{\displaystyle X_{1},X_{2},\ldots ,X_{M}}, and an unobserved (hidden) state process S1,S2,…,ST{\displaystyle S_{1},S_{2},\ldots ,S_{T}}.
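A minimal sketch of these two objects for a two-state HMM, together with the forward computation of an observation sequence's likelihood, is given below; all probabilities are illustrative values, not parameters from any real biological model.

```python
# A minimal two-state HMM: a hidden state process governed by initial
# and transition probabilities, and a state-dependent observation
# process. The forward algorithm sums over all hidden state paths.
import numpy as np

init = np.array([0.6, 0.4])              # P(S_1)
trans = np.array([[0.7, 0.3],            # P(S_t | S_{t-1})
                  [0.2, 0.8]])
emit = np.array([[0.9, 0.1],             # P(X_t | S_t), 2 possible symbols
                 [0.3, 0.7]])

def forward_likelihood(observations):
    """P(X_1..X_T): forward recursion over hidden states."""
    alpha = init * emit[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ trans) * emit[:, obs]
    return alpha.sum()

print(forward_likelihood([0, 1, 1, 0]))
```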
In an HMM, the state process is not directly observed – it is a 'hidden' (or 'latent') variable – but observations are made of a state-dependent process (or observation process) that is driven by the underlying state process (and which can thus be regarded as a noisy measurement of the system states of interest).[8] HMMs can also be formulated in continuous time.[9][10]

HMMs can be used to profile and convert a multiple sequence alignment into a position-specific scoring system suitable for searching databases for homologous sequences remotely.[11] Additionally, ecological phenomena can be described by HMMs.[12]

Convolutional neural networks (CNNs) are a class of deep neural networks whose architecture is based on shared weights of convolution kernels or filters that slide along input features, providing translation-equivariant responses known as feature maps.[13][14] CNNs take advantage of the hierarchical pattern in data and assemble patterns of increasing complexity using the smaller and simpler patterns discovered via their filters.[15]

Convolutional networks were inspired by biological processes[16][17][18][19] in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.

CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters (or kernels) through automated learning, whereas in traditional algorithms these filters are hand-engineered. This reduced reliance on prior knowledge of the analyst and on human intervention in manual feature extraction makes CNNs a desirable model.[15]

A phylogenetic convolutional neural network (Ph-CNN) is a convolutional neural network architecture proposed by Fioravanti et al. in 2018 to classify metagenomics data.[20] In this approach, phylogenetic data are endowed with patristic distance (the sum of the lengths of all branches connecting two operational taxonomic units [OTUs]) to select k-neighborhoods for each OTU, and each OTU and its neighbors are processed with convolutional filters.

Unlike supervised methods, self-supervised learning methods learn representations without relying on annotated data. This is well suited to genomics, where high-throughput sequencing techniques can create potentially large amounts of unlabeled data. Some examples of self-supervised learning methods applied to genomics include DNABERT and Self-GenomeNet.[21][22]

Random forests (RF) classify by constructing an ensemble of decision trees and outputting the average prediction of the individual trees.[23] This is a modification of bootstrap aggregating (which aggregates a large collection of decision trees) and can be used for classification or regression.[24][25]

As random forests give an internal estimate of generalization error, cross-validation is unnecessary. In addition, they produce proximities, which can be used to impute missing values and which enable novel data visualizations.[26]

Computationally, random forests are appealing because they naturally handle both regression and (multiclass) classification, are relatively fast to train and to predict, depend only on one or two tuning parameters, have a built-in estimate of the generalization error, can be used directly for high-dimensional problems, and can easily be implemented in parallel.
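A small sketch of these properties, assuming scikit-learn and synthetic data standing in for a bioinformatics feature matrix (e.g., k-mer counts or expression levels), shows the built-in (out-of-bag) generalization estimate and the variable importances mentioned here.

```python
# Random-forest classification with the built-in (out-of-bag) estimate
# of generalization error, so no separate cross-validation is needed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 200 samples, 50 features, 2 classes (synthetic stand-in data)
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=10, random_state=0)

forest = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=0)
forest.fit(X, y)

# Out-of-bag accuracy: an internal generalization estimate.
print(f"OOB accuracy: {forest.oob_score_:.3f}")
# Variable importances, as used for feature selection in microbiome work.
print(forest.feature_importances_[:5])
```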
Statistically, random forests are appealing for additional features, such as measures of variable importance, differential class weighting, missing value imputation, visualization, outlier detection, and unsupervised learning.[26]

Clustering - the partitioning of a data set into disjoint subsets, so that the data in each subset are as close as possible to each other and as distant as possible from data in any other subset, according to some defined distance or similarity function - is a common technique for statistical data analysis.

Clustering is central to much data-driven bioinformatics research and serves as a powerful computational method that, by means of hierarchical, centroid-based, distribution-based, density-based, and self-organizing-map classification, has long been studied and used in classical machine learning settings. In particular, clustering helps to analyze unstructured and high-dimensional data in the form of sequences, expressions, texts, images, and so on. Clustering is also used to gain insights into biological processes at the genomic level, e.g. gene functions, cellular processes, subtypes of cells, gene regulation, and metabolic processes.[27]

Data clustering algorithms can be hierarchical or partitional. Hierarchical algorithms find successive clusters using previously established clusters, whereas partitional algorithms determine all clusters at once. Hierarchical algorithms can be agglomerative (bottom-up) or divisive (top-down). Agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters. Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters. Hierarchical clustering is calculated using metrics on Euclidean spaces; the most commonly used is the Euclidean distance, computed by squaring the difference in each coordinate, summing the squares, and taking the square root of the sum. An example of a hierarchical clustering algorithm is BIRCH, which is particularly well suited to bioinformatics because of its nearly linear time complexity on the generally large datasets involved.[28] Partitioning algorithms are based on specifying an initial number of groups and iteratively reallocating objects among groups until convergence; such an algorithm typically determines all clusters at once. Most applications adopt one of two popular heuristic methods: the k-means algorithm or k-medoids. Other algorithms, such as affinity propagation, do not require an initial number of groups. In a genomic setting, affinity propagation has been used both to cluster biosynthetic gene clusters into gene cluster families (GCFs) and to cluster those GCFs.[29]

Typically, a workflow for applying machine learning to biological data goes through four steps.[2] In general, a machine learning system can usually be trained to recognize elements of a certain class given sufficient samples.[31] For example, machine learning methods can be trained to identify specific visual features such as splice sites.[32]

Support vector machines have been extensively used in cancer genomic studies.[33] In addition, deep learning has been incorporated into bioinformatic algorithms.
Deep learning applications have been used for regulatory genomics and cellular imaging.[34] Other applications include medical image classification, genomic sequence analysis, and protein structure classification and prediction.[35] Deep learning has been applied to regulatory genomics, variant calling, and pathogenicity scores.[36] Natural language processing and text mining have helped in understanding phenomena including protein-protein interactions and gene-disease relations, as well as in predicting biomolecule structures and functions.[37]

Natural language processing algorithms can personalize medicine for patients who suffer from genetic diseases, by combining the extraction of clinical information with the genomic data available from the patients. Institutes such as the National Institutes of Health-funded Pharmacogenomics Research Network focus on finding breast cancer treatments.[38]

Precision medicine considers individual genomic variability, enabled by large-scale biological databases. Machine learning can be applied to perform the matching function between groups of patients and specific treatment modalities.[39]

Computational techniques are used to solve other problems, such as efficient primer design for PCR, biological image analysis, and back-translation of proteins (which is, given the degeneracy of the genetic code, a complex combinatorial problem).[2]

While genomic sequence data has historically been sparse due to the technical difficulty of sequencing a piece of DNA, the number of available sequences is growing. On average, the number of bases available in the GenBank public repository has doubled every 18 months since 1982.[40] However, while raw data was becoming increasingly available and accessible, as of 2002 the biological interpretation of this data was occurring at a much slower pace.[41] This created an increasing need for developing computational genomics tools, including machine learning systems, that can automatically determine the location of protein-encoding genes within a given DNA sequence (i.e. gene prediction).[41]

Gene prediction is commonly performed through both extrinsic searches and intrinsic searches.[41] For the extrinsic search, the input DNA sequence is run through a large database of sequences whose genes have been previously discovered and whose locations have been annotated; the target sequence's genes are identified by determining which strings of bases within the sequence are homologous to known gene sequences. However, not all the genes in a given input sequence can be identified through homology alone, due to limits in the size of the database of known and annotated gene sequences. Therefore, an intrinsic search is needed, in which a gene prediction program attempts to identify the remaining genes from the DNA sequence alone.[41]

Machine learning has also been used for the problem of multiple sequence alignment, which involves aligning many DNA or amino acid sequences in order to determine regions of similarity that could indicate a shared evolutionary history.[2] It can also be used to detect and visualize genome rearrangements.[42]

Proteins, strings of amino acids, gain much of their function from protein folding, in which they conform into a three-dimensional structure described by the primary structure, the secondary structure (alpha helices and beta sheets), the tertiary structure, and the quaternary structure.
Protein secondary structure prediction is a main focus of this subfield, as tertiary and quaternary structures are determined based on the secondary structure.[4] Solving the true structure of a protein is expensive and time-intensive, furthering the need for systems that can accurately predict the structure of a protein by analyzing the amino acid sequence directly.[4][2] Prior to machine learning, researchers needed to conduct this prediction manually. This trend began in 1951, when Pauling and Corey released their work on predicting the hydrogen bond configurations of a protein from a polypeptide chain.[43] Automatic feature learning reaches an accuracy of 82–84%.[4][44] Recent approaches have utilized deep learning techniques for state-of-the-art secondary structure prediction. For example, DeepCNF (deep convolutional neural fields) achieved an accuracy of approximately 84% when tasked with classifying the amino acids of a protein sequence into one of three structural classes (helix, sheet, or coil).[44] The theoretical limit for three-state protein secondary structure prediction is 88–90%.[4]

In 2018, AlphaFold, an artificial intelligence (AI) program developed by DeepMind, placed first in the overall rankings of the 13th Critical Assessment of Structure Prediction (CASP). It was particularly successful at predicting the most accurate structures for targets rated as most difficult by the competition organizers, where no existing template structures were available from proteins with partially similar sequences. AlphaFold 2 (2020) repeated this placement in the CASP14 competition and achieved a level of accuracy much higher than any other entry.[45][46][47]

Machine learning has also been applied to proteomics problems such as protein side-chain prediction, protein loop modeling, and protein contact map prediction.[2]

Metagenomics is the study of microbial communities from environmental DNA samples.[48] Currently, limitations and challenges predominate in the implementation of machine learning tools due to the amount of data in environmental samples.[49] Supercomputers and web servers have made access to these tools easier.[50] The high dimensionality of microbiome datasets is a major challenge in studying the microbiome; it significantly limits the power of current approaches for identifying true differences and increases the chance of false discoveries.[51]

Despite their importance, machine learning tools related to metagenomics have focused on the study of gut microbiota and its relationship with digestive diseases, such as inflammatory bowel disease (IBD), Clostridioides difficile infection (CDI), colorectal cancer, and diabetes, seeking better diagnosis and treatments.[50] Many algorithms have been developed to classify microbial communities according to the health condition of the host, regardless of the type of sequence data, e.g. 16S rRNA or whole-genome sequencing (WGS), using methods such as the least absolute shrinkage and selection operator classifier, random forests, supervised classification models, and gradient boosted tree models.[50] Neural networks, such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), and Hopfield neural networks, have been added.[50] For example, in 2018, Fioravanti et al.
developed an algorithm called Ph-CNN to classify data samples from healthy patients and patients with IBD symptoms (i.e., to distinguish healthy and sick patients) by using phylogenetic trees and convolutional neural networks.[52]

In addition, random forest (RF) methods and their importance measures help in the identification of microbiome species that can be used to distinguish diseased and non-diseased samples. However, the performance of the individual decision trees and the diversity of decision trees in the ensemble significantly influence the performance of RF algorithms; the generalization error for RF measures how accurate the individual classifiers are and how interdependent they are. The high dimensionality of microbiome datasets therefore poses challenges: effective approaches must consider many possible variable combinations, which exponentially increases the computational burden as the number of features grows.[51]

For microbiome analysis, in 2020 Dang and Kishino[51] developed a novel analysis pipeline. Its core is an RF classifier coupled with forward variable selection (RF-FVS), which selects a minimum-size core set of microbial species or functional signatures that maximizes predictive classifier performance. They demonstrated its performance by analyzing two published datasets from large-scale case-control studies: the proposed approach improved the accuracy from 81% to 99.01% for CDI and from 75.14% to 90.17% for CRC.

The use of machine learning on environmental samples has been less explored, perhaps because of data complexity, especially from WGS. Some works show that it is possible to apply these tools to environmental samples. In 2021, Dhungel et al.[53] designed an R package called MegaR. This package allows working with 16S rRNA and whole-metagenome sequences to build taxonomic profiles and machine learning classification models. MegaR includes a convenient visualization environment to improve the user experience. Machine learning in environmental metagenomics can help to answer questions related to the interactions between microbial communities and ecosystems, e.g. the 2021 work of Xun et al.,[54] where the use of different machine learning methods offered insights into the relationship among soil, microbiome biodiversity, and ecosystem stability.

Microarrays, a type of lab-on-a-chip, are used for automatically collecting data about large amounts of biological material. Machine learning can aid in their analysis and has been applied to expression pattern identification, classification, and genetic network induction.[2]

This technology is especially useful for monitoring gene expression, aiding in diagnosing cancer by examining which genes are expressed.[55] One of the main tasks is identifying which genes are expressed based on the collected data.[2] In addition, because microarrays collect data on a huge number of genes, winnowing out the large amount of data irrelevant to the task of expressed-gene identification is challenging. Machine learning presents a potential solution, as various classification methods can be used to perform this identification. The most commonly used methods are radial basis function networks, deep learning, Bayesian classification, decision trees, and random forests.[55]

Systems biology focuses on the study of emergent behaviors from complex interactions of simple biological components in a system.
Such components can include DNA, RNA, proteins, and metabolites.[56]

Machine learning has been used to aid in modeling these interactions in domains such as genetic networks, signal transduction networks, and metabolic pathways.[2] Probabilistic graphical models, a machine learning technique for determining the relationships between different variables, are one of the most commonly used methods for modeling genetic networks.[2] In addition, machine learning has been applied to systems biology problems such as identifying transcription factor binding sites using Markov chain optimization.[2] Genetic algorithms, machine learning techniques based on the natural process of evolution, have been used to model genetic networks and regulatory structures.[2]

Other systems biology applications of machine learning include enzyme function prediction, high-throughput microarray data analysis, analysis of genome-wide association studies to better understand markers of disease, and protein function prediction.[57]

Evolutionary analysis, particularly phylogenetic tree reconstruction, also makes use of machine learning techniques. Phylogenetic trees are schematic representations of the evolution of organisms. Initially, they were constructed using features such as morphological and metabolic characteristics. Later, with the availability of genome sequences, phylogenetic tree construction algorithms came to use concepts based on genome comparison. With the help of optimization techniques, comparisons were done by means of multiple sequence alignment.[58]

Machine learning methods for the analysis of neuroimaging data are used to help diagnose stroke. Historically, multiple approaches to this problem involved neural networks.[59][60] Mirtskhulava[61] tested feed-forward networks for detecting strokes using neural imaging, and Titano[62] tested 3D-CNN techniques in supervised classification to screen head CT images for acute neurologic events. Three-dimensional CNN and SVM methods are often used.[60]

The increase in biological publications has raised the difficulty of searching and compiling the relevant available information on a given topic. This task is known as knowledge extraction. It is necessary for biological data collection, which can then in turn be fed into machine learning algorithms to generate new biological knowledge.[2][63] Machine learning can be used for this knowledge extraction task using techniques such as natural language processing to extract useful information from human-generated reports in a database. Text Nailing, an alternative approach to machine learning capable of extracting features from clinical narrative notes, was introduced in 2017. This technique has been applied to the search for novel drug targets, as this task requires the examination of information stored in biological databases and journals.[63] Annotations of proteins in protein databases often do not reflect the complete known set of knowledge of each protein, so additional information must be extracted from biomedical literature.
Machine learning has been applied to the automatic annotation of gene and protein function, determination of protein subcellular localization, DNA-expression array analysis, large-scale protein interaction analysis, and molecule interaction analysis.[63]

Another application of text mining is the detection and visualization of distinct DNA regions given sufficient reference data.[64]

Microbial communities are complex assemblies of diverse microorganisms,[65] in which symbiotic partners constantly produce diverse metabolites derived from primary and secondary (specialized) metabolism; this metabolism plays an important role in microbial interaction.[66] Metagenomic and metatranscriptomic data are an important source for deciphering communication signals.

Molecular mechanisms produce specialized metabolites in various ways. Biosynthetic gene clusters (BGCs) attract attention, since several metabolites are clinically valuable anti-microbial, anti-fungal, anti-parasitic, anti-tumor, and immunosuppressive agents produced by the modular action of multi-enzymatic, multi-domain gene clusters, such as nonribosomal peptide synthetases (NRPSs) and polyketide synthases (PKSs).[67] Diverse studies[68][69][70][71][72][73][74][75] show that grouping BGCs that share homologous core genes into gene cluster families (GCFs) can yield useful insights into the chemical diversity of the analyzed strains and can support linking BGCs to their secondary metabolites.[69][71] GCFs have been used as functional markers in human health studies[76][77] and to study the ability of soil to suppress fungal pathogens.[78] Given their direct relationship to the catalytic enzymes, and to the compounds produced from their encoded pathways, BGCs/GCFs can serve as a proxy to explore the chemical space of microbial secondary metabolism. Cataloging GCFs in sequenced microbial genomes yields an overview of the existing chemical diversity and offers insights into future priorities.[68][70] Tools such as BiG-SLiCE and BIG-MAP[79] have emerged with the sole purpose of unveiling the importance of BGCs in natural environments.

The increase in experimentally characterized ribosomally synthesized and post-translationally modified peptides (RiPPs), together with the availability of information on their sequences and chemical structures, drawn from databases such as BAGEL, BACTIBASE, MIBiG, and THIOBASE, provides the opportunity to develop machine learning tools to decode their chemical structures and classify them. In 2017, researchers at the National Institute of Immunology in New Delhi, India, developed the RiPPMiner[80] software, a bioinformatics resource for decoding RiPP chemical structures by genome mining. The RiPPMiner web server consists of a query interface and the RiPPDB database. RiPPMiner defines 12 subclasses of RiPPs, predicting the cleavage site of the leader peptide and the final cross-link of the RiPP chemical structure.

Many tandem mass spectrometry (MS/MS) based metabolomics studies, such as library matching and molecular networking, use spectral similarity as a proxy for structural similarity. The Spec2vec[81] algorithm provides a new kind of spectral similarity score, based on Word2Vec. Spec2vec learns fragmental relationships within a large set of spectral data in order to assess spectral similarities between molecules and to classify unknown molecules through these comparisons. For systemic annotation, some metabolomics studies rely on fitting measured fragmentation mass spectra to library spectra or on contrasting spectra via network analysis.
Scoring functions are used to determine the similarity between pairs of fragment spectra as part of these processes. So far, no research has suggested scores that are significantly different from the commonly utilized cosine-based similarity.[82]

An important part of bioinformatics is the management of big datasets, known as reference databases. Databases exist for each type of biological data, for example for biosynthetic gene clusters and metagenomes.

The National Center for Biotechnology Information (NCBI)[83] provides a large suite of online resources for biological information and data, including the GenBank nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. Resources include PubMed Data Management, RefSeq Functional Elements, genome data download, the variation services API, Magic-BLAST, QuickBLASTp, and Identical Protein Groups. All of these resources can be accessed through NCBI.[84]

antiSMASH allows the rapid genome-wide identification, annotation, and analysis of secondary metabolite biosynthesis gene clusters in bacterial and fungal genomes. It integrates and cross-links with a large number of in silico secondary metabolite analysis tools.[85]

gutSMASH is a tool that systematically evaluates bacterial metabolic potential by predicting both known and novel anaerobic metabolic gene clusters (MGCs) from the gut microbiome.

MIBiG,[86] the minimum information about a biosynthetic gene cluster specification, provides a standard for annotations and metadata on biosynthetic gene clusters and their molecular products. MIBiG is a Genomic Standards Consortium project that builds on the minimum information about any sequence (MIxS) framework.[87] MIBiG facilitates the standardized deposition and retrieval of biosynthetic gene cluster data as well as the development of comprehensive comparative analysis tools. It empowers next-generation research on the biosynthesis, chemistry, and ecology of broad classes of societally relevant bioactive secondary metabolites, guided by robust experimental evidence and rich metadata components.[88]

SILVA[89] is an interdisciplinary project among biologists and computer scientists assembling a complete database of ribosomal RNA (rRNA) gene sequences of both the small (16S, 18S, SSU) and large (23S, 28S, LSU) subunits, belonging to the bacteria, archaea, and eukarya domains. These data are freely available for academic and commercial use.[90]

Greengenes[91] is a full-length 16S rRNA gene database that provides chimera screening, standard alignment, and a curated taxonomy based on de novo tree inference.[92][93]

The Open Tree of Life Taxonomy (OTT)[94] aims to build a complete, dynamic, and digitally available Tree of Life by synthesizing published phylogenetic trees along with taxonomic data. Phylogenetic trees have been classified, aligned, and merged, and taxonomies have been used to fill in the sparse regions and gaps left by the phylogenies. OTT is a resource that has been little used for sequence analyses of the 16S region; however, it has a greater number of sequences classified taxonomically down to the genus level compared to SILVA and Greengenes.
However, in terms of classification at the edge level, it contains less information.[95]

The Ribosomal Database Project (RDP)[96] is a database that provides ribosomal RNA (rRNA) sequences of the small subunit for the bacterial and archaeal domains (16S), and fungal rRNA sequences of the large subunit (28S).[97]
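Returning to the spectral similarity scoring discussed above, the following minimal sketch computes the commonly used cosine-based similarity between two fragment spectra represented as intensity vectors on a shared m/z grid; the spectra themselves are illustrative.

```python
# Cosine-based spectral similarity: the cosine of the angle between two
# peak-intensity vectors, used as a proxy for structural similarity.
import numpy as np

def cosine_similarity(spec_a, spec_b):
    """Cosine of the angle between two intensity vectors."""
    num = float(np.dot(spec_a, spec_b))
    den = float(np.linalg.norm(spec_a) * np.linalg.norm(spec_b))
    return num / den if den > 0 else 0.0

spectrum_1 = np.array([0.0, 5.0, 1.0, 0.0, 3.0])   # illustrative peaks
spectrum_2 = np.array([0.0, 4.0, 0.5, 0.2, 3.5])
print(f"{cosine_similarity(spectrum_1, spectrum_2):.3f}")
```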
https://en.wikipedia.org/wiki/Machine_learning_in_bioinformatics
In machine learning, the margin of a single data point is defined to be the distance from the data point to a decision boundary. Note that there are many distances and decision boundaries that may be appropriate for certain datasets and goals. A margin classifier is a classification model that utilizes the margin of each example to learn such a classification. There are theoretical justifications (based on the VC dimension) as to why maximizing the margin (under some suitable constraints) may be beneficial for machine learning and statistical inference algorithms.

For a given dataset, there may be many hyperplanes that could classify it. One reasonable choice for the best hyperplane is the one that represents the largest separation, or margin, between the classes. Hence, one should choose the hyperplane such that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane, and the linear classifier it defines is known as a maximum-margin classifier (or, equivalently, the perceptron of optimal stability).
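As a minimal sketch of these definitions, with illustrative weights and points, the geometric margin of each labeled example with respect to a linear decision boundary w·x + b = 0 can be computed as follows.

```python
# Geometric margins of labeled points with respect to a linear decision
# boundary w.x + b = 0; the dataset margin is the smallest of them.
import numpy as np

w = np.array([1.0, -1.0])   # normal vector of the decision boundary
b = 0.0

X = np.array([[2.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
y = np.array([1, -1, 1])    # class labels in {-1, +1}

# Geometric margin of each example: y * (w.x + b) / ||w||.
margins = y * (X @ w + b) / np.linalg.norm(w)
print(margins)              # positive => correctly classified
print(margins.min())        # the margin of the dataset

# A maximum-margin classifier chooses w and b to maximize margins.min().
```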
https://en.wikipedia.org/wiki/Margin_(machine_learning)
A Markov logic network (MLN) is a probabilistic logic which applies the ideas of a Markov network to first-order logic, defining probability distributions on possible worlds on any given domain.

In 2002, Ben Taskar, Pieter Abbeel, and Daphne Koller introduced relational Markov networks as templates to specify Markov networks abstractly and without reference to a specific domain.[1][2] Work on Markov logic networks began in 2003 with Pedro Domingos and Matt Richardson.[3][4] Markov logic networks are a popular formalism for statistical relational learning.[5]

A Markov logic network consists of a collection of formulas from first-order logic, to each of which is assigned a real number, the weight. The underlying idea is that an interpretation is more likely if it satisfies formulas with positive weights and less likely if it satisfies formulas with negative weights.[6]

For instance, the following Markov logic network codifies how smokers are more likely to be friends with other smokers, and how stress encourages smoking:[7]

2.0::smokes(X)←smokes(Y)∧influences(X,Y)0.5::smokes(X)←stress(X){\displaystyle {\begin{array}{lcl}2.0&::&\mathrm {smokes} (X)\leftarrow \mathrm {smokes} (Y)\land \mathrm {influences} (X,Y)\\0.5&::&\mathrm {smokes} (X)\leftarrow \mathrm {stress} (X)\end{array}}}

Together with a given domain, a Markov logic network defines a probability distribution on the set of all interpretations of its predicates on the given domain. For any n{\displaystyle n}-ary predicate symbol R{\displaystyle R} that occurs in the Markov logic network and every n{\displaystyle n}-tuple a1,…,an{\displaystyle a_{1},\dots ,a_{n}} of domain elements, R(a1,…,an){\displaystyle R(a_{1},\dots ,a_{n})} is a grounding of R{\displaystyle R}. An interpretation is given by allocating a Boolean truth value (true or false) to each grounding. A true grounding of a formula φ{\displaystyle \varphi } with free variables x1,…,xn{\displaystyle x_{1},\dots ,x_{n}} in an interpretation is a variable assignment of x1,…,xn{\displaystyle x_{1},\dots ,x_{n}} that makes φ{\displaystyle \varphi } true in that interpretation. Then the probability of any given interpretation is directly proportional to exp⁡(∑jwjnj){\displaystyle \exp(\sum _{j}w_{j}n_{j})}, where wj{\displaystyle w_{j}} is the weight of the j{\displaystyle j}-th sentence of the Markov logic network and nj{\displaystyle n_{j}} is the number of its true groundings.[1][4]

This can also be seen as inducing a Markov network whose nodes are the groundings of the predicates occurring in the Markov logic network. The feature functions of this network are the groundings of the sentences occurring in the Markov logic network, with value ew{\displaystyle e^{w}} if the grounding is true and 1 otherwise (where again w{\displaystyle w} is the weight of the formula).[1]

The probability distributions induced by Markov logic networks can be queried for the probability of a particular event, given by an atomic formula (marginal inference), possibly conditioned on another atomic formula.[6] Marginal inference can be performed using standard Markov network inference techniques over the minimal subset of the relevant Markov network required for answering the query.
Exact inference is known to be #P-complete in the size of the domain.[6] In practice, the exact probability is therefore often approximated.[8] Techniques for approximate inference include Gibbs sampling, belief propagation, and approximation via pseudolikelihood. The class of Markov logic networks which use only two variables in any formula allows for polynomial-time exact inference by reduction to weighted model counting.[9][6]
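The toy sketch below illustrates the semantics given above: the probability of an interpretation is proportional to exp(Σ_j w_j n_j), normalized over all candidate interpretations. The weights loosely follow the smoking example, while the true-grounding counts for each interpretation are hypothetical numbers chosen for illustration.

```python
# Markov logic network semantics: P(interpretation) is proportional to
# exp(sum_j weight_j * number_of_true_groundings_j).
import math

weights = [2.0, 0.5]   # weights of the two formulas (smoking example)

def unnormalized(n_true):
    """exp(sum_j w_j * n_j) for counts n_j of true groundings."""
    return math.exp(sum(w * n for w, n in zip(weights, n_true)))

# Hypothetical true-grounding counts for three candidate interpretations
# over some tiny domain:
interpretations = {"world_a": [4, 2], "world_b": [3, 2], "world_c": [4, 1]}

z = sum(unnormalized(n) for n in interpretations.values())  # partition fn
for name, n in interpretations.items():
    print(name, unnormalized(n) / z)
```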
https://en.wikipedia.org/wiki/Markov_logic_network
In probability theory, a Markov model is a stochastic model used to model randomly changing systems. It is assumed that future states depend only on the current state, not on the events that occurred before it (that is, it assumes the Markov property). Generally, this assumption enables reasoning and computation with the model that would otherwise be intractable. For this reason, in the fields of predictive modelling and probabilistic forecasting, it is desirable for a given model to exhibit the Markov property.

Andrey Andreyevich Markov (14 June 1856 – 20 July 1922) was a Russian mathematician best known for his work on stochastic processes. A primary subject of his research later became known as the Markov chain.

There are four common Markov models used in different situations, depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: the Markov chain (fully observable, autonomous), the hidden Markov model (partially observable, autonomous), the Markov decision process (fully observable, controlled), and the partially observable Markov decision process (partially observable, controlled).

The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes through time. In this context, the Markov property indicates that the distribution for this variable depends only on the distribution of a previous state. An example use of a Markov chain is Markov chain Monte Carlo, which uses the Markov property to prove that a particular method for performing a random walk will sample from the joint distribution.

A hidden Markov model is a Markov chain for which the state is only partially or noisily observable. In other words, observations are related to the state of the system, but they are typically insufficient to precisely determine the state. Several well-known algorithms for hidden Markov models exist. For example, given a sequence of observations, the Viterbi algorithm will compute the most-likely corresponding sequence of states, the forward algorithm will compute the probability of the sequence of observations, and the Baum–Welch algorithm will estimate the starting probabilities, the transition function, and the observation function of a hidden Markov model. One common use is for speech recognition, where the observed data is the speech audio waveform and the hidden state is the spoken text. In this example, the Viterbi algorithm finds the most likely sequence of spoken words given the speech audio.

A Markov decision process is a Markov chain in which state transitions depend on the current state and an action vector that is applied to the system. Typically, a Markov decision process is used to compute a policy of actions that will maximize some utility with respect to expected rewards.

A partially observable Markov decision process (POMDP) is a Markov decision process in which the state of the system is only partially observed. POMDPs are known to be computationally hard to solve exactly, but recent approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots.[1]
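The hidden-Markov-model algorithms mentioned above can be made concrete with a short sketch. The following Python code is a minimal Viterbi decoder for a hypothetical two-state weather model; the states, observations, and probability values are illustrative assumptions, not values from any cited source:

def viterbi(obs, states, start_p, trans_p, emit_p):
    # Most likely hidden-state sequence for a sequence of observations.
    best = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            prob, path = max(
                (best[-1][prev][0] * trans_p[prev][s] * emit_p[s][o],
                 best[-1][prev][1] + [s])
                for prev in states)
            layer[s] = (prob, path)
        best.append(layer)
    return max(best[-1].values())

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
prob, path = viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p)
print(path, prob)  # with these numbers: ['Sunny', 'Rainy', 'Rainy']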
A Markov random field, or Markov network, may be considered to be a generalization of a Markov chain in multiple dimensions. In a Markov chain, state depends only on the previous state in time, whereas in a Markov random field, each state depends on its neighbors in any of multiple directions. A Markov random field may be visualized as a field or graph of random variables, where the distribution of each random variable depends on the neighboring variables with which it is connected. More specifically, the joint distribution for any random variable in the graph can be computed as the product of the "clique potentials" of all the cliques in the graph that contain that random variable. Modeling a problem as a Markov random field is useful because it implies that the joint distributions at each vertex in the graph may be computed in this manner.

Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as what task or activity the person is performing. Two kinds of hierarchical Markov models are the hierarchical hidden Markov model[2] and the abstract hidden Markov model.[3] Both have been used for behavior recognition,[4] and certain conditional independence properties between different levels of abstraction in the model allow for faster learning and inference.[3][5]

A tolerant Markov model (TMM) is a probabilistic-algorithmic Markov chain model.[6] It assigns probabilities according to a conditioning context that considers the most probable symbol, rather than the symbol actually observed, as the last symbol of the sequence. A TMM can model three kinds of deviation: substitutions, additions, or deletions. Successful applications have been efficiently implemented in DNA sequence compression.[6][7]

Markov chains have been used as forecasting methods for several topics, for example price trends,[8] wind power,[9] and solar irradiance.[10] Markov-chain forecasting models employ a variety of settings, from discretizing the time series[9] to hidden Markov models combined with wavelets[8] and the Markov-chain mixture distribution model (MCM).[10]
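Returning to the clique-potential computation described above, the following sketch computes the joint distribution of a hypothetical three-node chain of binary variables as the product of pairwise clique potentials; the potential values are made up for illustration:

import itertools

# Pairwise clique potentials for a chain A - B - C: higher when neighbors agree.
phi_AB = {(a, b): 2.0 if a == b else 0.5 for a in (0, 1) for b in (0, 1)}
phi_BC = {(b, c): 3.0 if b == c else 1.0 for b in (0, 1) for c in (0, 1)}

joint = {}
for a, b, c in itertools.product((0, 1), repeat=3):
    joint[(a, b, c)] = phi_AB[(a, b)] * phi_BC[(b, c)]  # product of clique potentials

Z = sum(joint.values())                                # normalizing constant
p = {k: v / Z for k, v in joint.items()}               # normalized joint distribution
p_B1 = sum(v for (a, b, c), v in p.items() if b == 1)  # marginal of one variable
print(p_B1)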
https://en.wikipedia.org/wiki/Markov_model
Markovian discrimination is a class of spam filtering methods used in CRM114 and other spam filters to filter based on statistical patterns of transition probabilities between words or other lexical tokens in spam messages that would not be captured using simple bag-of-words naive Bayes spam filtering.[1]

A bag-of-words model contains only a dictionary of legal words and their relative probabilities in spam and genuine messages. A Markovian model additionally includes the relative transition probabilities between words in spam and in genuine messages, where the relative transition probability is the likelihood that a given word will be written next, based on what the current word is. Put another way, a bag-of-words filter discriminates based on the relative probabilities of single words alone, regardless of phrase structure, while a Markovian word-based filter discriminates based on the relative probabilities of either pairs of words or, more commonly, short sequences of words. This allows the Markovian filter greater sensitivity to phrase structure.

Neither naive Bayes nor Markovian filters are limited to the word level for tokenizing messages. They may also process letters, partial words, or phrases as tokens. In such cases, specific bag-of-words methods would correspond to general bag-of-tokens methods. Modelers can parameterize Markovian spam filters based on the relative probabilities of any such tokens' transitions appearing in spam or in legitimate messages.[2]

There are two primary classes of Markov models, visible Markov models and hidden Markov models, which differ in whether the Markov chain generating token sequences is assumed to have its states fully determined by each generated token (the visible Markov models) or might also have additional state (the hidden Markov models). With a visible Markov model, each current token is modeled as if it contains the complete information about previous tokens of the message relevant to the probability of future tokens, whereas a hidden Markov model allows for more obscure conditional relationships.[3] Since those more obscure conditional relationships are more typical of natural-language messages, including both genuine messages and spam, hidden Markov models are generally preferred over visible Markov models for spam filtering. Due to storage constraints, the most commonly employed model is a specific type of hidden Markov model known as a Markov random field, typically with a 'sliding window' or clique size ranging between four and six tokens.[4]
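A minimal sketch of the word-transition idea: train bigram transition counts separately on spam and on genuine messages, then score a new message by the log-likelihood ratio of its transitions under the two models. The tokenization, add-alpha smoothing, and toy corpora below are illustrative assumptions, not CRM114's actual algorithm:

import math
from collections import Counter

def bigrams(tokens):
    return zip(tokens, tokens[1:])

def train(messages):
    # Count token-to-token transitions in a corpus of tokenized messages.
    counts, totals = Counter(), Counter()
    for msg in messages:
        for prev, cur in bigrams(msg):
            counts[(prev, cur)] += 1
            totals[prev] += 1
    return counts, totals

def transition_prob(counts, totals, prev, cur, vocab_size, alpha=1.0):
    # Add-alpha smoothing so unseen transitions get nonzero probability.
    return (counts[(prev, cur)] + alpha) / (totals[prev] + alpha * vocab_size)

def score(msg, spam_model, ham_model, vocab_size):
    # Log-likelihood ratio under the spam vs. ham transition models.
    s = 0.0
    for prev, cur in bigrams(msg):
        s += math.log(transition_prob(*spam_model, prev, cur, vocab_size))
        s -= math.log(transition_prob(*ham_model, prev, cur, vocab_size))
    return s  # positive means the transitions look more spam-like

spam = [["buy", "cheap", "pills"], ["cheap", "pills", "now"]]
ham = [["meeting", "at", "noon"], ["see", "you", "at", "noon"]]
vocab = {t for m in spam + ham for t in m}
print(score(["buy", "cheap", "pills"], train(spam), train(ham), len(vocab)))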
https://en.wikipedia.org/wiki/Markovian_discrimination
In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging[1] and information extraction.[2]

Suppose we have a sequence of observations O1,…,On{\displaystyle O_{1},\dots ,O_{n}} that we seek to tag with the labels S1,…,Sn{\displaystyle S_{1},\dots ,S_{n}} that maximize the conditional probability P(S1,…,Sn∣O1,…,On){\displaystyle P(S_{1},\dots ,S_{n}\mid O_{1},\dots ,O_{n})}. In an MEMM, this probability is factored into Markov transition probabilities, where the probability of transitioning to a particular label depends only on the observation at that position and the previous position's label[citation needed]. Each of these transition probabilities comes from the same general distribution P(s∣s′,o){\displaystyle P(s\mid s',o)}. For each possible label value of the previous label s′{\displaystyle s'}, the probability of a certain label s{\displaystyle s} is modeled in the same way as a maximum entropy classifier:[3]

P(s∣s′,o)=1Z(o,s′)exp⁡(∑aλafa(o,s)){\displaystyle P(s\mid s',o)={\frac {1}{Z(o,s')}}\exp \left(\sum _{a}\lambda _{a}f_{a}(o,s)\right)}

Here, the fa(o,s){\displaystyle f_{a}(o,s)} are real-valued or categorical feature functions, and Z(o,s′){\displaystyle Z(o,s')} is a normalization term ensuring that the distribution sums to one. This form for the distribution corresponds to the maximum entropy probability distribution satisfying the constraint that the empirical expectation for the feature is equal to the expectation given the model.

The parameters λa{\displaystyle \lambda _{a}} can be estimated using generalized iterative scaling.[4] Furthermore, a variant of the Baum–Welch algorithm, which is used for training HMMs, can be used to estimate parameters when training data has incomplete or missing labels.[2] The optimal state sequence S1,…,Sn{\displaystyle S_{1},\dots ,S_{n}} can be found using a Viterbi algorithm very similar to the one used for HMMs. The dynamic program uses the forward probability αt+1(s)=∑s′∈Sαt(s′)P(s∣s′,ot+1){\displaystyle \alpha _{t+1}(s)=\sum _{s'\in S}\alpha _{t}(s')P(s\mid s',o_{t+1})}.

An advantage of MEMMs rather than HMMs for sequence tagging is that they offer increased freedom in choosing features to represent observations. In sequence tagging situations, it is useful to use domain knowledge to design special-purpose features. In the original paper introducing MEMMs, the authors write that "when trying to extract previously unseen company names from a newswire article, the identity of a word alone is not very predictive; however, knowing that the word is capitalized, that it is a noun, that it is used in an appositive, and that it appears near the top of the article would all be quite predictive (in conjunction with the context provided by the state-transition structure)."[2] Useful sequence-tagging features, such as these, are often non-independent. Maximum entropy models do not assume independence between features, but generative observation models used in HMMs do.[2] Therefore, MEMMs allow the user to specify many correlated, but informative, features.

Another advantage of MEMMs versus HMMs and conditional random fields (CRFs) is that training can be considerably more efficient. In HMMs and CRFs, one needs to use some version of the forward–backward algorithm as an inner loop in training[citation needed].
However, in MEMMs, estimating the parameters of the maximum-entropy distributions used for the transition probabilities can be done for each transition distribution in isolation.

A drawback of MEMMs is that they potentially suffer from the "label bias problem," where states with low-entropy transition distributions "effectively ignore their observations." Conditional random fields were designed to overcome this weakness,[5] which had already been recognised in the context of neural-network-based Markov models in the early 1990s.[5][6] Another source of label bias is that training is always done with respect to known previous tags, so the model struggles at test time when there is uncertainty in the previous tag.
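The per-previous-label maximum-entropy transition distribution defined above can be sketched in a few lines of Python. The feature functions, weights, and label set below are hypothetical illustrations; in a trained model the weights would be fitted by a procedure such as generalized iterative scaling:

import math

def memm_transition(prev_label, obs, labels, feature_fns, lam):
    # One softmax distribution per previous label s':
    # P(s | s', o) = exp(sum_a lam[s'][a] * f_a(o, s)) / Z(o, s')
    w = lam[prev_label]
    scores = {s: math.exp(sum(w[a] * f(obs, s) for a, f in feature_fns.items()))
              for s in labels}
    Z = sum(scores.values())  # Z(o, s') makes the distribution sum to one
    return {s: v / Z for s, v in scores.items()}

# Hypothetical features and weights for part-of-speech tagging:
feature_fns = {
    "capitalized_noun": lambda o, s: float(o[0].isupper() and s == "NOUN"),
    "ends_ly_adv":      lambda o, s: float(o.endswith("ly") and s == "ADV"),
}
lam = {"DET": {"capitalized_noun": 1.5, "ends_ly_adv": 2.0}}
print(memm_transition("DET", "Quickly", ["NOUN", "VERB", "ADV"], feature_fns, lam))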
https://en.wikipedia.org/wiki/Maximum-entropy_Markov_model
Multimodal learning is a type of deep learning that integrates and processes multiple types of data, referred to as modalities, such as text, audio, images, or video. This integration allows for a more holistic understanding of complex data, improving model performance in tasks like visual question answering, cross-modal retrieval,[1] text-to-image generation,[2] aesthetic ranking,[3] and image captioning.[4]

Large multimodal models, such as Google Gemini and GPT-4o, have become increasingly popular since 2023, enabling increased versatility and a broader understanding of real-world phenomena.[5]

Data usually comes with different modalities which carry different information. For example, it is very common to caption an image to convey the information not presented in the image itself. Similarly, sometimes it is more straightforward to use an image to describe information which may not be obvious from text. As a result, if different words appear in similar images, then these words likely describe the same thing. Conversely, if a word is used to describe seemingly dissimilar images, then these images may represent the same object. Thus, in cases dealing with multi-modal data, it is important to use a model which is able to jointly represent the information, such that the model can capture the combined information from the different modalities.

Transformers can also be used or adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality. Multimodal models can either be trained from scratch or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning.[6] LLaVA is a vision-language model composed of a language model (Vicuna-13B)[7] and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is finetuned.[8]

Vision transformers[9] adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer. Conformer[10] and later Whisper[11] follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors, and treated like tokens in a standard transformer. Perceivers[12][13] are a variant of Transformers designed for multimodality.

Multimodality means "having several modalities", and a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc.[19] There have been many AI models trained specifically to ingest one modality and output another modality, such as AlexNet for image to label,[20] visual question answering for image-text to text,[21] and speech recognition for speech to text.

A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder E{\displaystyle E}. Make a small multilayer perceptron f{\displaystyle f}, so that for any image y{\displaystyle y}, the post-processed vector f(E(y)){\displaystyle f(E(y))} has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset.
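A minimal sketch of this "image token" construction follows. The encoder stand-in, embedding widths, and projection weights are all illustrative assumptions; in practice E would be a trained vision model and the projection f would be learned during finetuning:

import random

D_IMG, D_TOK = 512, 768  # hypothetical encoder and LLM embedding widths

def linear(x, W, b):
    # One dense layer: the small adapter f mapping encoder features to token space.
    return [sum(xi * wij for xi, wij in zip(x, row)) + bj for row, bj in zip(W, b)]

def image_encoder(image):
    # Stand-in for a trained, frozen image encoder E returning a feature vector.
    rng = random.Random(hash(image) & 0xFFFF)
    return [rng.uniform(-1, 1) for _ in range(D_IMG)]

def embed(token):
    # Stand-in for the LLM's text-token embedding table.
    rng = random.Random(hash(token) & 0xFFFF)
    return [rng.uniform(-1, 1) for _ in range(D_TOK)]

# Trainable projection f (weights would be learned on image-text data).
W = [[0.01] * D_IMG for _ in range(D_TOK)]
b = [0.0] * D_TOK

def image_token(image):
    return linear(image_encoder(image), W, b)  # f(E(y)): one "image token"

# Interleave text-token embeddings with image tokens before feeding the LLM:
sequence = [embed(t) for t in ["a", "photo", "of"]] + [image_token("cat.png")]
print(len(sequence), len(sequence[-1]))  # 4 embeddings, each of width D_TOK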
This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability.[22] Flamingo demonstrated the effectiveness of the tokenization method, finetuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch.[23] The Google PaLM model was fine-tuned into a multimodal model, PaLM-E, using the tokenization method, and applied to robotic control.[24] LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs[25] and video inputs.[26]

A Boltzmann machine is a type of stochastic neural network invented by Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield nets. They are named after the Boltzmann distribution in statistical mechanics. The units in Boltzmann machines are divided into two groups: visible units and hidden units. Each unit is like a neuron with a binary output that represents whether it is activated or not.[31] General Boltzmann machines allow connections between any units. However, learning is impractical using general Boltzmann machines because the computational time is exponential in the size of the machine[citation needed]. A more efficient architecture, the restricted Boltzmann machine, allows connections only between a hidden unit and a visible unit.

Multimodal deep Boltzmann machines can process and learn from different types of information, such as images and text, simultaneously. This can notably be done by having a separate deep Boltzmann machine for each modality, for example one for images and one for text, joined at an additional top hidden layer.[32]

Multimodal machine learning has numerous applications across various domains. Cross-modal retrieval allows users to search for data across different modalities (e.g., retrieving images based on text descriptions), improving multimedia search engines and content recommendation systems. Models like CLIP facilitate efficient, accurate retrieval by embedding data in a shared space, demonstrating strong performance even in zero-shot settings.[33]

Multimodal deep Boltzmann machines outperform traditional models like support vector machines and latent Dirichlet allocation in classification tasks and can predict missing data in multimodal datasets, such as images and text. Multimodal models integrate medical imaging, genomic data, and patient records to improve diagnostic accuracy and early disease detection, especially in cancer screening.[34][35][36] Models like DALL·E generate images from textual descriptions, benefiting creative industries, while cross-modal retrieval enables dynamic multimedia searches.[37] Multimodal learning improves interaction in robotics and AI by integrating sensory inputs like speech, vision, and touch, aiding autonomous systems and human-computer interaction. Combining visual, audio, and text data, multimodal systems enhance sentiment analysis and emotion recognition, applied in customer service, social media, and marketing.
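The cross-modal retrieval application mentioned above can be sketched as nearest-neighbor search in a shared embedding space. The stub encoder below is a placeholder for a trained pair of encoders (for example, CLIP's text and image towers), so the ranking it produces is meaningless; only the retrieval mechanics are illustrated:

import math, random

def stub_encoder(item, dim=64):
    # Placeholder for a trained encoder mapping text or images into a shared space.
    rng = random.Random(hash(item) & 0xFFFF)
    return [rng.uniform(-1, 1) for _ in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_text, images, text_encoder, image_encoder, k=3):
    # Rank images by similarity to a text query in the shared embedding space.
    q = text_encoder(query_text)
    ranked = sorted(images, key=lambda im: cosine(q, image_encoder(im)), reverse=True)
    return ranked[:k]

print(retrieve("a dog on a beach", ["dog.jpg", "cat.jpg", "car.jpg"],
               stub_encoder, stub_encoder, k=2))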
https://en.wikipedia.org/wiki/Multimodal_learning
In machine learning, multiple-instance learning (MIL) is a type of supervised learning. Instead of receiving a set of instances which are individually labeled, the learner receives a set of labeled bags, each containing many instances. In the simple case of multiple-instance binary classification, a bag may be labeled negative if all the instances in it are negative. On the other hand, a bag is labeled positive if there is at least one instance in it which is positive. From a collection of labeled bags, the learner tries to either (i) induce a concept that will label individual instances correctly or (ii) learn how to label bags without inducing the concept.

Babenko (2008)[1] gives a simple example for MIL. Imagine several people, each of whom has a key chain containing a few keys. Some of these people are able to enter a certain room, and some aren't. The task is then to predict whether a certain key or a certain key chain can get you into that room. To solve this problem we need to find the exact key that is common to all the "positive" key chains. If we can correctly identify this key, we can also correctly classify an entire key chain: positive if it contains the required key, or negative if it doesn't.

Depending on the type and variation in training data, machine learning can be roughly categorized into three frameworks: supervised learning, unsupervised learning, and reinforcement learning. Multiple-instance learning (MIL) falls under the supervised learning framework, where every training instance has a label, either discrete or real-valued. MIL deals with problems with incomplete knowledge of labels in training sets. More precisely, in multiple-instance learning, the training set consists of labeled "bags", each of which is a collection of unlabeled instances. A bag is positively labeled if at least one instance in it is positive, and is negatively labeled if all instances in it are negative. The goal of MIL is to predict the labels of new, unseen bags.

Keeler et al.,[2] in work in the early 1990s, were the first to explore the area of MIL. The actual term multi-instance learning was introduced in the middle of the 1990s by Dietterich et al. while they were investigating the problem of drug activity prediction.[3] They tried to create a learning system that could predict whether a new molecule was qualified to make some drug or not by analyzing a collection of known molecules. Molecules can have many alternative low-energy states, but only one, or some of them, are qualified to make a drug. The problem arose because scientists could only determine whether a molecule is qualified or not; they couldn't say exactly which of its low-energy shapes are responsible for that. One of the proposed ways to solve this problem was to use supervised learning and regard all the low-energy shapes of a qualified molecule as positive training instances and all the low-energy shapes of unqualified molecules as negative instances. Dietterich et al. showed that such a method would suffer from high false-positive noise, from the low-energy shapes that are mislabeled as positive, and thus wasn't really useful.[3] Their approach was to regard each molecule as a labeled bag, and all the alternative low-energy shapes of that molecule as instances in the bag, without individual labels, thus formulating multiple-instance learning.
The solution that Dietterich et al. proposed is the axis-parallel rectangle (APR) algorithm.[3] It attempts to search for appropriate axis-parallel rectangles constructed by the conjunction of the features. They tested the algorithm on the Musk dataset,[4][5] a concrete benchmark for drug activity prediction and the most widely used benchmark in multiple-instance learning. The APR algorithm achieved the best result, but APR was designed with the Musk data in mind.

The problem of multiple-instance learning is not unique to drug discovery. In 1998, Maron and Ratan found another application of multiple-instance learning, to scene classification in machine vision, and devised the Diverse Density framework.[6] Given an image, an instance is taken to be one or more fixed-size subimages, and the bag of instances is taken to be the entire image. An image is labeled positive if it contains the target scene (a waterfall, for example) and negative otherwise. Multiple-instance learning can be used to learn the properties of the subimages which characterize the target scene. From then on, these frameworks have been applied to a wide spectrum of applications, ranging from image concept learning and text categorization to stock market prediction.

Take image classification, for example (Amores, 2013). Given an image, we want to know its target class based on its visual content. For instance, the target class might be "beach", where the image contains both "sand" and "water". In MIL terms, the image is described as a bag X={X1,..,XN}{\displaystyle X=\{X_{1},..,X_{N}\}}, where each Xi{\displaystyle X_{i}} is the feature vector (called an instance) extracted from the corresponding i{\displaystyle i}-th region in the image and N{\displaystyle N} is the total number of regions (instances) partitioning the image. The bag is labeled positive ("beach") if it contains both "sand" region instances and "water" region instances.

Numerous researchers have worked on adapting classical classification techniques, such as support vector machines or boosting, to work within the context of multiple-instance learning.

If the space of instances is X{\displaystyle {\mathcal {X}}}, then the set of bags is the set of functions NX={B:X→N}{\displaystyle \mathbb {N} ^{\mathcal {X}}=\{B:{\mathcal {X}}\rightarrow \mathbb {N} \}}, which is isomorphic to the set of multi-subsets of X{\displaystyle {\mathcal {X}}}. For each bag B∈NX{\displaystyle B\in \mathbb {N} ^{\mathcal {X}}} and each instance x∈X{\displaystyle x\in {\mathcal {X}}}, B(x){\displaystyle B(x)} is viewed as the number of times x{\displaystyle x} occurs in B{\displaystyle B}.[8] Let Y{\displaystyle {\mathcal {Y}}} be the space of labels; then a "multiple-instance concept" is a map c:NX→Y{\displaystyle c:\mathbb {N} ^{\mathcal {X}}\rightarrow {\mathcal {Y}}}. The goal of MIL is to learn such a concept. The remainder of the article will focus on binary classification, where Y={0,1}{\displaystyle {\mathcal {Y}}=\{0,1\}}.

Most of the work on multiple-instance learning, including the early papers of Dietterich et al. (1997) and Maron & Lozano-Pérez (1997),[3][9] makes an assumption regarding the relationship between the instances within a bag and the class label of the bag. Because of its importance, that assumption is often called the standard MI assumption. The standard assumption takes each instance x∈X{\displaystyle x\in {\mathcal {X}}} to have an associated label y∈{0,1}{\displaystyle y\in \{0,1\}} which is hidden to the learner. The pair (x,y){\displaystyle (x,y)} is called an "instance-level concept".
A bag is now viewed as a multiset of instance-level concepts, and is labeled positive if at least one of its instances has a positive label, and negative if all of its instances have negative labels. Formally, let B={(x1,y1),…,(xn,yn)}{\displaystyle B=\{(x_{1},y_{1}),\ldots ,(x_{n},y_{n})\}} be a bag. The label of B{\displaystyle B} is then c(B)=1−∏i=1n(1−yi){\displaystyle c(B)=1-\prod _{i=1}^{n}(1-y_{i})}. The standard MI assumption is asymmetric: if the positive and negative labels are reversed, the assumption has a different meaning. Because of that, when we use this assumption, we need to be clear about which label should be the positive one.

The standard assumption might be viewed as too strict, and therefore in recent years researchers have tried to relax it, giving rise to other, looser assumptions.[10] The reason for this is the belief that the standard MIL assumption is appropriate for the Musk dataset, but since MIL can be applied to numerous other problems, different assumptions could be more appropriate. Guided by that idea, Weidmann[11] formulated a hierarchy of generalized instance-based assumptions for MIL. It consists of the standard MI assumption and three types of generalized MI assumptions, each more general than the last, in the sense that the former can be obtained as a specific choice of parameters of the latter: standard ⊂{\displaystyle \subset } presence-based ⊂{\displaystyle \subset } threshold-based ⊂{\displaystyle \subset } count-based, with the count-based assumption being the most general and the standard assumption being the least general. (Note, however, that any bag meeting the count-based assumption meets the threshold-based assumption, which in turn meets the presence-based assumption, which, again in turn, meets the standard assumption. In that sense it is also correct to state that the standard assumption is the weakest, hence most general, and the count-based assumption is the strongest, hence least general.) One would expect an algorithm which performs well under one of these assumptions to perform at least as well under the less general assumptions.

The presence-based assumption is a generalization of the standard assumption, wherein a bag must contain at least one instance from each of a set of required instance-level concepts in order to be labeled positive. Formally, let CR⊆X×Y{\displaystyle C_{R}\subseteq {\mathcal {X}}\times {\mathcal {Y}}} be the set of required instance-level concepts, and let #(B,ci){\displaystyle \#(B,c_{i})} denote the number of times the instance-level concept ci{\displaystyle c_{i}} occurs in the bag B{\displaystyle B}. Then c(B)=1⇔#(B,ci)≥1{\displaystyle c(B)=1\Leftrightarrow \#(B,c_{i})\geq 1} for all ci∈CR{\displaystyle c_{i}\in C_{R}}. Note that, by taking CR{\displaystyle C_{R}} to contain only one instance-level concept, the presence-based assumption reduces to the standard assumption.

A further generalization comes with the threshold-based assumption, where each required instance-level concept must occur not just once in a bag, but some minimum (threshold) number of times in order for the bag to be labeled positive. With the notation above, to each required instance-level concept ci∈CR{\displaystyle c_{i}\in C_{R}} is associated a threshold li∈N{\displaystyle l_{i}\in \mathbb {N} }. For a bag B{\displaystyle B}, c(B)=1⇔#(B,ci)≥li{\displaystyle c(B)=1\Leftrightarrow \#(B,c_{i})\geq l_{i}} for all ci∈CR{\displaystyle c_{i}\in C_{R}}.
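The first three assumptions in this hierarchy translate directly into code. The following sketch labels bags under the standard, presence-based, and threshold-based assumptions; the toy bags and concept names are illustrative assumptions:

from collections import Counter

def standard_label(instance_labels):
    # Positive iff at least one instance is positive: c(B) = 1 - prod(1 - y_i).
    return int(any(instance_labels))

def presence_label(bag_concepts, required):
    # Presence-based: every required concept occurs at least once in the bag.
    counts = Counter(bag_concepts)
    return int(all(counts[c] >= 1 for c in required))

def threshold_label(bag_concepts, thresholds):
    # Threshold-based: each required concept c_i occurs at least l_i times.
    counts = Counter(bag_concepts)
    return int(all(counts[c] >= l for c, l in thresholds.items()))

print(standard_label([0, 0, 1]))                                         # 1
print(presence_label(["sand", "water", "sky"], {"sand", "water"}))        # 1
print(threshold_label(["sand", "sand", "water"], {"sand": 2, "water": 2}))  # 0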
The count-based assumption is a final generalization which enforces both lower and upper bounds on the number of times a required concept can occur in a positively labeled bag. Each required instance-level concept ci∈CR{\displaystyle c_{i}\in C_{R}} has a lower threshold li∈N{\displaystyle l_{i}\in \mathbb {N} } and an upper threshold ui∈N{\displaystyle u_{i}\in \mathbb {N} } with li≤ui{\displaystyle l_{i}\leq u_{i}}. A bag B{\displaystyle B} is labeled according to c(B)=1⇔li≤#(B,ci)≤ui{\displaystyle c(B)=1\Leftrightarrow l_{i}\leq \#(B,c_{i})\leq u_{i}} for all ci∈CR{\displaystyle c_{i}\in C_{R}}.

Scott, Zhang, and Brown (2005)[12] describe another generalization of the standard model, which they call "generalized multiple-instance learning" (GMIL). The GMIL assumption specifies a set of required instances Q⊆X{\displaystyle Q\subseteq {\mathcal {X}}}. A bag X{\displaystyle X} is labeled positive if it contains instances which are sufficiently close to at least r{\displaystyle r} of the required instances Q{\displaystyle Q}.[12] Under only this condition, the GMIL assumption is equivalent to the presence-based assumption.[8] However, Scott et al. describe a further generalization in which there is a set of attraction points Q⊆X{\displaystyle Q\subseteq {\mathcal {X}}} and a set of repulsion points Q¯⊆X{\displaystyle {\overline {Q}}\subseteq {\mathcal {X}}}. A bag is labeled positive if and only if it contains instances which are sufficiently close to at least r{\displaystyle r} of the attraction points and sufficiently close to at most s{\displaystyle s} of the repulsion points.[12] This condition is strictly more general than the presence-based one, though it does not fall within the above hierarchy.

In contrast to the previous assumptions, where the bags were viewed as fixed, the collective assumption views a bag B{\displaystyle B} as a distribution p(x|B){\displaystyle p(x|B)} over instances X{\displaystyle {\mathcal {X}}}, and similarly views labels as a distribution p(y|x){\displaystyle p(y|x)} over instances. The goal of an algorithm operating under the collective assumption is then to model the distribution p(y|B)=∫Xp(y|x)p(x|B)dx{\displaystyle p(y|B)=\int _{\mathcal {X}}p(y|x)p(x|B)dx}. Since p(x|B){\displaystyle p(x|B)} is typically considered fixed but unknown, algorithms instead focus on computing the empirical version: p^(y|B)=1nB∑i=1nBp(y|xi){\displaystyle {\widehat {p}}(y|B)={\frac {1}{n_{B}}}\sum _{i=1}^{n_{B}}p(y|x_{i})}, where nB{\displaystyle n_{B}} is the number of instances in bag B{\displaystyle B}. Since p(y|x){\displaystyle p(y|x)} is also typically taken to be fixed but unknown, most collective-assumption-based methods focus on learning this distribution, as in the single-instance version.[8][10]

While the collective assumption weights every instance with equal importance, Foulds extended the collective assumption to incorporate instance weights. The weighted collective assumption is then that p^(y|B)=1wB∑i=1nBw(xi)p(y|xi){\displaystyle {\widehat {p}}(y|B)={\frac {1}{w_{B}}}\sum _{i=1}^{n_{B}}w(x_{i})p(y|x_{i})}, where w:X→R+{\displaystyle w:{\mathcal {X}}\rightarrow \mathbb {R} ^{+}} is a weight function over instances and wB=∑x∈Bw(x){\displaystyle w_{B}=\sum _{x\in B}w(x)}.[8]
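The empirical versions of the collective and weighted collective assumptions are simple averages, as the following sketch shows. The instance-level model and weight function here are stand-ins on one-dimensional instances, purely for illustration:

def collective_estimate(bag, p_y_given_x):
    # Empirical collective assumption: average the instance-level probabilities.
    return sum(p_y_given_x(x) for x in bag) / len(bag)

def weighted_collective_estimate(bag, p_y_given_x, w):
    # Weighted variant: p(y|B) = (1 / w_B) * sum_i w(x_i) * p(y|x_i).
    wB = sum(w(x) for x in bag)
    return sum(w(x) * p_y_given_x(x) for x in bag) / wB

p = lambda x: min(1.0, max(0.0, x))  # stand-in for a learned p(y=1|x)
w = lambda x: 1.0 + x                # stand-in weight function
bag = [0.1, 0.9, 0.4]
print(collective_estimate(bag, p), weighted_collective_estimate(bag, p, w))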
There are two major flavors of algorithms for multiple-instance learning: instance-based and metadata-based (or embedding-based) algorithms. The term "instance-based" denotes that the algorithm attempts to find a set of representative instances based on an MI assumption and classify future bags from these representatives. By contrast, metadata-based algorithms make no assumptions about the relationship between instances and bag labels, and instead try to extract instance-independent information (or metadata) about the bags in order to learn the concept.[10] For a survey of some of the modern MI algorithms see Foulds and Frank.[8]

The earliest proposed MI algorithms were a set of "iterated-discrimination" algorithms developed by Dietterich et al., and Diverse Density, developed by Maron and Lozano-Pérez.[3][9] Both of these algorithms operated under the standard assumption.

Broadly, all of the iterated-discrimination algorithms consist of two phases. The first phase is to grow an axis-parallel rectangle (APR) which contains at least one instance from each positive bag and no instances from any negative bags. This is done iteratively: starting from a random instance x1∈B1{\displaystyle x_{1}\in B_{1}} in a positive bag, the APR is expanded to the smallest APR covering any instance x2{\displaystyle x_{2}} in a new positive bag B2{\displaystyle B_{2}}. This process is repeated until the APR covers at least one instance from each positive bag. Then, each instance xi{\displaystyle x_{i}} contained in the APR is given a "relevance", corresponding to how many negative points it excludes from the APR if removed. The algorithm then selects candidate representative instances in order of decreasing relevance, until no instance contained in a negative bag is also contained in the APR. The algorithm repeats these growth and representative-selection steps until convergence, where the APR size at each iteration is measured only along candidate representatives. After the first phase, the APR is thought to tightly contain only the representative attributes. The second phase expands this tight APR as follows: a Gaussian distribution is centered at each attribute and a looser APR is drawn such that positive instances will fall outside the tight APR with fixed probability.[4] Though iterated-discrimination techniques work well with the standard assumption, they do not generalize well to other MI assumptions.[8]

In its simplest form, Diverse Density (DD) assumes a single representative instance t∗{\displaystyle t^{*}} as the concept. This representative instance must be "dense" in that it is much closer to instances from positive bags than to instances from negative bags, as well as "diverse" in that it is close to at least one instance from each positive bag. Let B+={Bi+}1m{\displaystyle {\mathcal {B}}^{+}=\{B_{i}^{+}\}_{1}^{m}} be the set of positively labeled bags and let B−={Bi−}1n{\displaystyle {\mathcal {B}}^{-}=\{B_{i}^{-}\}_{1}^{n}} be the set of negatively labeled bags; then the best candidate for the representative instance is given by t^=arg⁡maxtDD(t){\displaystyle {\hat {t}}=\arg \max _{t}DD(t)}, where the diverse density is DD(t)=Pr(t|B+,B−)∝∏i=1mPr(t|Bi+)∏i=1nPr(t|Bi−){\displaystyle DD(t)=Pr\left(t|{\mathcal {B}}^{+},{\mathcal {B}}^{-}\right)\propto \prod _{i=1}^{m}Pr\left(t|B_{i}^{+}\right)\prod _{i=1}^{n}Pr\left(t|B_{i}^{-}\right)} under the assumption that bags are independently distributed given the concept t∗{\displaystyle t^{*}}. Letting Bij{\displaystyle B_{ij}} denote the j-th instance of bag i, the noisy-or model gives Pr(t|Bi+)=1−∏j(1−P(t|Bij)){\displaystyle Pr\left(t|B_{i}^{+}\right)=1-\prod _{j}\left(1-P(t|B_{ij})\right)} and Pr(t|Bi−)=∏j(1−P(t|Bij)){\displaystyle Pr\left(t|B_{i}^{-}\right)=\prod _{j}\left(1-P(t|B_{ij})\right)}. P(t|Bij){\displaystyle P(t|B_{ij})} is taken to be the scaled distance P(t|Bij)∝exp⁡(−∑ksk2(xk−(Bij)k)2){\displaystyle P(t|B_{ij})\propto \exp \left(-\sum _{k}s_{k}^{2}\left(x_{k}-(B_{ij})_{k}\right)^{2}\right)} where s=(sk){\displaystyle s=(s_{k})} is the scaling vector. This way, if every positive bag has an instance close to t{\displaystyle t}, then Pr(t|Bi+){\displaystyle Pr(t|B_{i}^{+})} will be high for each i{\displaystyle i}, but if any negative bag Bi−{\displaystyle B_{i}^{-}} has an instance close to t{\displaystyle t}, Pr(t|Bi−){\displaystyle Pr(t|B_{i}^{-})} will be low. Hence, DD(t){\displaystyle DD(t)} is high only if every positive bag has an instance close to t{\displaystyle t} and no negative bag has an instance close to t{\displaystyle t}. The candidate concept t^{\displaystyle {\hat {t}}} can be obtained through gradient methods. Classification of new bags can then be done by evaluating proximity to t^{\displaystyle {\hat {t}}}.[9] Though Diverse Density was originally proposed by Maron et al. in 1998, more recent MIL algorithms have used the DD framework, such as EM-DD in 2001,[13] DD-SVM in 2004,[14] and MILES in 2006.[8]
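A minimal sketch of the Diverse Density computation follows; it scores candidate concepts with the noisy-or model and the scaled-distance instance model described above, and finds the best candidate by a simple grid search rather than the gradient methods mentioned. The one-dimensional toy bags and scaling vector are illustrative assumptions:

import math

def p_t_given_instance(t, x, s):
    # Scaled-distance model: P(t | x) proportional to exp(-sum_k s_k^2 (t_k - x_k)^2).
    return math.exp(-sum((sk ** 2) * (tk - xk) ** 2 for sk, tk, xk in zip(s, t, x)))

def diverse_density(t, pos_bags, neg_bags, s):
    # Noisy-or Diverse Density of a candidate concept t.
    dd = 1.0
    for bag in pos_bags:   # each positive bag should contain something close to t
        dd *= 1.0 - math.prod(1.0 - p_t_given_instance(t, x, s) for x in bag)
    for bag in neg_bags:   # no negative bag should contain anything close to t
        dd *= math.prod(1.0 - p_t_given_instance(t, x, s) for x in bag)
    return dd

# Toy 1-D data: both positive bags share an instance near 0.5.
pos = [[(0.5,), (0.9,)], [(0.51,), (0.1,)]]
neg = [[(0.95,), (0.05,)]]
s = (3.0,)
candidates = [(c / 100,) for c in range(101)]
best = max(candidates, key=lambda t: diverse_density(t, pos, neg, s))
print(best)  # close to (0.5,)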
A number of single-instance algorithms have also been adapted to a multiple-instance context under the standard assumption. After 2000, there was a movement away from the standard assumption and towards the development of algorithms designed to tackle the more general assumptions listed above.[10]

Because of the high dimensionality of the new feature space and the cost of explicitly enumerating all APRs of the original instance space, GMIL-1 is inefficient both in terms of computation and memory. GMIL-2 was developed as a refinement of GMIL-1 in an effort to improve efficiency. GMIL-2 pre-processes the instances to find a set of candidate representative instances. It then maps each bag to a Boolean vector, as in GMIL-1, but considers only APRs corresponding to unique subsets of the candidate representative instances. This significantly reduces the memory and computational requirements.[8]

By mapping each bag to a feature vector of metadata, metadata-based algorithms allow the flexibility of using an arbitrary single-instance algorithm to perform the actual classification task. Future bags are simply mapped (embedded) into the feature space of metadata and labeled by the chosen classifier. Therefore, much of the focus for metadata-based algorithms is on what features or what type of embedding leads to effective classification. Note that some of the previously mentioned algorithms, such as TLC and GMIL, could be considered metadata-based. Wang and Zucker (2000) define two variations of kNN, Bayesian-kNN and citation-kNN, as adaptations of the traditional nearest-neighbor problem to the multiple-instance setting.

So far this article has considered multiple-instance learning exclusively in the context of binary classifiers. However, the generalizations of single-instance binary classifiers can carry over to the multiple-instance case. Recent reviews of the MIL literature include the surveys by Amores (2013) and by Foulds and Frank.[8]
https://en.wikipedia.org/wiki/Multiple_instance_learning
Parity learning is a problem in machine learning. An algorithm that solves this problem must find a function ƒ, given some samples (x, ƒ(x)) and the assurance that ƒ computes the parity of bits at some fixed locations. The samples are generated using some distribution over the input. The problem is easy to solve using Gaussian elimination provided that a sufficient number of samples (from a distribution which is not too skewed) are provided to the algorithm.

In Learning Parity with Noise (LPN), the samples may contain some error. Instead of samples (x, ƒ(x)), the algorithm is provided with (x, y), where the label is flipped with some noise probability ε < 1/2: for a Boolean b∈{0,1}{\displaystyle b\in \{0,1\}} that equals 1 with probability ε,

y={f(x),if b=0 1−f(x),otherwise{\displaystyle y={\begin{cases}f(x),&{\text{if }}b=0\\1-f(x),&{\text{otherwise}}\end{cases}}}

The noisy version of the parity learning problem is conjectured to be hard[1] and is widely used in cryptography.[2]

This applied mathematics–related article is a stub. You can help Wikipedia by expanding it.
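In the noise-free case, the Gaussian-elimination approach mentioned above amounts to solving a linear system over GF(2). A minimal sketch, assuming enough linearly independent samples for a unique solution; the representation (rows as augmented 0/1 matrices) is illustrative.

```java
// Sketch: noise-free parity learning by Gaussian elimination over GF(2).
// Each sample is (x, f(x)) with f(x) = <s, x> mod 2 for an unknown 0/1
// vector s; with enough independent samples, s is recovered exactly.
public class ParityLearning {

    // rows[i] = n input bits followed by the label bit (augmented matrix).
    static int[] solve(int[][] rows, int n) {
        int r = 0;
        for (int col = 0; col < n && r < rows.length; col++) {
            int pivot = -1;
            for (int i = r; i < rows.length; i++)
                if (rows[i][col] == 1) { pivot = i; break; }
            if (pivot < 0) continue;          // no pivot in this column
            int[] tmp = rows[r]; rows[r] = rows[pivot]; rows[pivot] = tmp;
            for (int i = 0; i < rows.length; i++)
                if (i != r && rows[i][col] == 1)
                    for (int j = 0; j <= n; j++)
                        rows[i][j] ^= rows[r][j];   // XOR = addition mod 2
            r++;
        }
        // Read off the solution from reduced row-echelon form
        // (assumes a unique solution exists).
        int[] s = new int[n];
        for (int i = 0; i < r; i++)
            for (int col = 0; col < n; col++)
                if (rows[i][col] == 1) { s[col] = rows[i][n]; break; }
        return s;
    }
}
```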
https://en.wikipedia.org/wiki/Parity_learning
In computer science and machine learning, population-based incremental learning (PBIL) is an optimization algorithm, and an estimation of distribution algorithm. This is a type of genetic algorithm where the genotype of an entire population (probability vector) is evolved rather than individual members.[1] The algorithm was proposed by Shumeet Baluja in 1994. The algorithm is simpler than a standard genetic algorithm, and in many cases leads to better results than a standard genetic algorithm.[2][3][4]

In PBIL, genes are represented as real values in the range [0,1], indicating the probability that any particular allele appears in that gene.

The PBIL algorithm is as follows:
1. A population is generated from the probability vector.
2. The fitness of each member is evaluated and ranked.
3. The probability vector is updated, shifting it toward the genotype of the fittest individual (and, optionally, away from the least fit).
4. The probability vector is mutated.
5. Steps 1–4 are repeated for a fixed number of iterations.

The published implementation was in Java. In the paper, learnRate = 0.1, negLearnRate = 0.075, mutProb = 0.02, and mutShift = 0.05 are used; N = 100 and ITER_COUNT = 1000 are enough for a small problem.
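The published Java source is not reproduced here; the following is a minimal sketch of the loop outlined above, using the quoted parameter values. The fitness function (counting one-bits) and the genome length are illustrative placeholders, not from the paper.

```java
import java.util.Random;

// Minimal PBIL sketch using the parameter values quoted above; the
// fitness function (count of one-bits) and LEN are placeholders.
public class Pbil {
    public static void main(String[] args) {
        final int N = 100, LEN = 32, ITER_COUNT = 1000;
        final double learnRate = 0.1, negLearnRate = 0.075;
        final double mutProb = 0.02, mutShift = 0.05;
        Random rnd = new Random();

        double[] p = new double[LEN];            // probability vector
        java.util.Arrays.fill(p, 0.5);

        for (int iter = 0; iter < ITER_COUNT; iter++) {
            boolean[][] pop = new boolean[N][LEN];
            int[] fit = new int[N];
            int best = 0, worst = 0;
            for (int i = 0; i < N; i++) {
                for (int j = 0; j < LEN; j++)
                    pop[i][j] = rnd.nextDouble() < p[j];  // sample genotype
                for (int j = 0; j < LEN; j++) if (pop[i][j]) fit[i]++;
                if (fit[i] > fit[best]) best = i;
                if (fit[i] < fit[worst]) worst = i;
            }
            for (int j = 0; j < LEN; j++) {
                // pull the probability vector toward the best individual
                p[j] = p[j] * (1 - learnRate) + (pop[best][j] ? 1 : 0) * learnRate;
                // push away from the worst where best and worst disagree
                if (pop[best][j] != pop[worst][j])
                    p[j] = p[j] * (1 - negLearnRate) + (pop[best][j] ? 1 : 0) * negLearnRate;
                // mutate the probability vector itself
                if (rnd.nextDouble() < mutProb)
                    p[j] = p[j] * (1 - mutShift) + (rnd.nextBoolean() ? 1 : 0) * mutShift;
            }
        }
        System.out.println(java.util.Arrays.toString(p));
    }
}
```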
https://en.wikipedia.org/wiki/Population-based_incremental_learning
Predictive learning is a machine learning (ML) technique where an artificial intelligence model is fed new data to develop an understanding of its environment, capabilities, and limitations. This technique finds application in many areas, including neuroscience, business, robotics, and computer vision. This concept was developed and expanded by French computer scientist Yann LeCun in 1988 during his career at Bell Labs, where he trained models to detect handwriting so that financial companies could automate check processing.[1]

The mathematical foundation for predictive learning dates back to the 17th century, when the British insurance company Lloyd's used predictive analytics to make a profit.[2] Starting out as a mathematical concept, this method expanded the possibilities of artificial intelligence. Predictive learning is an attempt to learn with a minimum of pre-existing mental structure. It was inspired by Jean Piaget's account of children constructing knowledge of the world through interaction. Gary Drescher's book Made-up Minds was crucial to the development of this concept.[3] The idea that predictions and unconscious inference are used by the brain to construct a model of the world, in which it can identify causes of percepts, goes back even further to Hermann von Helmholtz's iteration of this study. These ideas were further developed by the field of predictive coding. Another related predictive learning theory is Jeff Hawkins' memory-prediction framework, which is laid out in his book On Intelligence.

Similar to ML, predictive learning aims to extrapolate the value of an unknown dependent variable Y{\displaystyle Y}, given independent input data X=(x1,x2,…,xn){\displaystyle X=(x_{1},x_{2},\dots ,x_{n})}. A set of attributes can be classified into categorical data (discrete factors such as race, sex, or affiliation) or numerical data (continuous values such as temperature, annual income, or speed). Every set of input values is fed into a neural network to predict a value y{\displaystyle y}. In order to predict the output accurately, the weights of the neural network (which represent how much each predictor variable affects the outcome) must be incrementally adjusted via backpropagation to produce estimates closer to the actual data. Once an ML model is given enough adjustments through training to predict values closer to the ground truth, it should be able to correctly predict outputs of new data with little error.

In order to ensure maximum accuracy for a predictive learning model, the predicted values y^=F(x){\displaystyle {\hat {y}}=F(x)} must not deviate too far from the actual values y{\displaystyle y}, as measured by the risk R(F)=Ex,y[L(y,F(x))]{\displaystyle R(F)=E_{x,y}\left[L\left(y,F(x)\right)\right]}, where L{\displaystyle L} is the loss function, y{\displaystyle y} is the ground truth, and F(x){\displaystyle F(x)} is the predicted data. This error function is used to make incremental adjustments to the model's weights, eventually reaching the well-trained prediction F∗=arg⁡minFEx,y[L(y,F(x))]{\displaystyle F^{*}=\arg \min _{F}E_{x,y}\left[L\left(y,F(x)\right)\right]}.[4] Once the error is negligible or considered small enough after training, the model is said to have converged.
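The weight-adjustment loop described above can be made concrete with a deliberately tiny example: one linear neuron trained by gradient descent on the squared error. Full backpropagation applies the same update through the layers of a network; the data and learning rate here are illustrative.

```java
// Sketch: incremental weight updates for a single linear neuron with
// squared-error loss; backpropagation generalizes this update through
// the layers of a network. Data and learning rate are illustrative.
public class GradientStep {
    public static void main(String[] args) {
        double[][] x = {{1, 2}, {2, 1}, {3, 4}};   // inputs
        double[] y = {5, 4, 11};                   // targets (y = x1 + 2*x2)
        double[] w = new double[2];
        double rate = 0.01;

        for (int epoch = 0; epoch < 2000; epoch++) {
            for (int i = 0; i < x.length; i++) {
                double pred = w[0] * x[i][0] + w[1] * x[i][1];
                double err = pred - y[i];          // derivative of the loss
                for (int k = 0; k < w.length; k++)
                    w[k] -= rate * err * x[i][k];  // move against the gradient
            }
        }
        System.out.printf("w = [%.3f, %.3f]%n", w[0], w[1]); // approaches [1.0, 2.0]
    }
}
```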
In some cases, using a singular machine learning approach is not enough to create an accurate estimate for certain data. Ensemble learning is the combination of several ML algorithms to create a stronger model. Each model is represented by the function F(x)=a0+∑m=1Mamfm(x){\displaystyle F(x)=a_{0}+\sum _{m=1}^{M}a_{m}f_{m}(x)}, where M{\displaystyle M} is the number of ensemble models, a0{\displaystyle a_{0}} is the bias, am{\displaystyle a_{m}} is the weight corresponding to each m{\displaystyle m}-th variable, and fm(x){\displaystyle f_{m}(x)} is the activation function corresponding to each variable. An ensemble learning model is thus represented as a linear combination of the predictions from each constituent approach, with the coefficients chosen to minimize ∑iL(yi,a0+∑m=1Mamfm(xi)){\displaystyle \sum _{i}L\left(y_{i},a_{0}+\sum _{m=1}^{M}a_{m}f_{m}(x_{i})\right)}, where yi{\displaystyle y_{i}} is the actual value and the second argument of L{\displaystyle L} is the value predicted by the combined constituent methods, typically together with a regularization term weighted by a coefficient λ{\displaystyle \lambda } representing each model's variation for a certain predictor variable.[4] (A code sketch of this linear combination is given below.)

Sensorimotor signals are neural impulses sent to the brain upon physical touch. Using predictive learning to detect sensorimotor signals plays a key role in early cognitive development, as the human brain represents sensorimotor signals in a predictive manner (it attempts to minimize prediction error between incoming sensory signals and top–down prediction). In order to update an unadjusted predictor, it must be trained through sensorimotor experiences because it does not inherently have prediction ability.[5] In a recent research paper, Dr. Yukie Nagai suggested a new architecture in predictive learning to predict sensorimotor signals based on a two-module approach: a sensorimotor system which interacts with the environment and a predictor which simulates the sensorimotor system in the brain.[5]

Computers use predictive learning in spatiotemporal memory to completely create an image given constituent frames. This implementation uses predictive recurrent neural networks, which are neural networks designed to work with sequential data, such as a time series.[citation needed] Using predictive learning in conjunction with computer vision enables computers to create images of their own, which can be helpful when replicating sequential phenomena such as replicating DNA strands, face recognition, or even creating X-ray images.

In a recent study, data on consumer behavior was collected from various social media platforms such as Facebook, Twitter, LinkedIn, YouTube, Instagram, and Pinterest. The usage of predictive learning analytics led researchers to discover various trends in consumer behavior, such as determining how successful a campaign could be, estimating a fair price for a product to attract consumers, assessing how secure data is, and analyzing the specific audience of the consumers they could target for specific products.[6]
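As forward-referenced above, a minimal sketch of the ensemble's linear combination. The weights are assumed to have already been fitted (for example, by regularized regression on the constituent models' predictions); all names are illustrative.

```java
import java.util.List;
import java.util.function.Function;

// Sketch: an ensemble prediction formed as the linear combination
// F(x) = a0 + sum_m a_m * f_m(x) described above. The weights a are
// assumed to have been fitted already; names are illustrative.
public class Ensemble {
    final double a0;
    final double[] a;                            // one weight per base model
    final List<Function<double[], Double>> models;

    Ensemble(double a0, double[] a, List<Function<double[], Double>> models) {
        this.a0 = a0; this.a = a; this.models = models;
    }

    double predict(double[] x) {
        double sum = a0;
        for (int m = 0; m < models.size(); m++)
            sum += a[m] * models.get(m).apply(x);
        return sum;
    }
}
```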
https://en.wikipedia.org/wiki/Predictive_learning
Preference learning is a subfield of machine learning that focuses on modeling and predicting preferences based on observed preference information.[1] Preference learning typically involves supervised learning using datasets of pairwise preference comparisons, rankings, or other preference information. The main task in preference learning concerns problems in "learning to rank". According to the different types of preference information observed, the tasks are categorized as three main problems in the book Preference Learning:[2]

In label ranking, the model has an instance space X={xi}{\displaystyle X=\{x_{i}\}\,\!} and a finite set of labels Y={yi|i=1,2,⋯,k}{\displaystyle Y=\{y_{i}|i=1,2,\cdots ,k\}\,\!}. The preference information is given in the form yi≻xyj{\displaystyle y_{i}\succ _{x}y_{j}\,\!}, indicating that instance x{\displaystyle x\,\!} shows a preference for yi{\displaystyle y_{i}\,\!} over yj{\displaystyle y_{j}\,\!}. A set of preference information is used as training data in the model. The task of this model is to find a preference ranking among the labels for any instance. It was observed that some conventional classification problems can be generalized in the framework of the label ranking problem:[3] if a training instance x{\displaystyle x\,\!} is labeled as class yi{\displaystyle y_{i}\,\!}, it implies that ∀j≠i,yi≻xyj{\displaystyle \forall j\neq i,y_{i}\succ _{x}y_{j}\,\!}. In the multi-label case, x{\displaystyle x\,\!} is associated with a set of labels L⊆Y{\displaystyle L\subseteq Y\,\!} and thus the model can extract a set of preference information {yi≻xyj|yi∈L,yj∈Y∖L}{\displaystyle \{y_{i}\succ _{x}y_{j}|y_{i}\in L,y_{j}\in Y\backslash L\}\,\!}. Training a preference model on this preference information, the classification result of an instance is just the corresponding top-ranking label.

Instance ranking also has the instance space X{\displaystyle X\,\!} and label set Y{\displaystyle Y\,\!}. In this task, labels are defined to have a fixed order y1≻y2≻⋯≻yk{\displaystyle y_{1}\succ y_{2}\succ \cdots \succ y_{k}\,\!} and each instance xl{\displaystyle x_{l}\,\!} is associated with a label yl{\displaystyle y_{l}\,\!}. Given a set of instances as training data, the goal of this task is to find the ranking order for a new set of instances.

Object ranking is similar to instance ranking except that no labels are associated with instances. Given a set of pairwise preference information in the form xi≻xj{\displaystyle x_{i}\succ x_{j}\,\!}, the model should find a ranking order among the instances.

There are two practical representations of the preference information A≻B{\displaystyle A\succ B\,\!}. One is assigning A{\displaystyle A\,\!} and B{\displaystyle B\,\!} two real numbers a{\displaystyle a\,\!} and b{\displaystyle b\,\!}, respectively, such that a>b{\displaystyle a>b\,\!}. The other is assigning a binary value V(A,B)∈{0,1}{\displaystyle V(A,B)\in \{0,1\}\,\!} for all pairs (A,B){\displaystyle (A,B)\,\!} denoting whether A≻B{\displaystyle A\succ B\,\!} or B≻A{\displaystyle B\succ A\,\!}. Corresponding to these two different representations, there are two different techniques applied to the learning process.

If we can find a mapping from data to real numbers, ranking the data can be solved by ranking the real numbers. This mapping is called a utility function. For label ranking the mapping is a function f:X×Y→R{\displaystyle f:X\times Y\rightarrow \mathbb {R} \,\!} such that yi≻xyj⇒f(x,yi)>f(x,yj){\displaystyle y_{i}\succ _{x}y_{j}\Rightarrow f(x,y_{i})>f(x,y_{j})\,\!}.
For instance ranking and object ranking, the mapping is a function f:X→R{\displaystyle f:X\rightarrow \mathbb {R} \,\!}. Finding the utility function is a regression learning problem[citation needed] which is well developed in machine learning (a minimal sketch of this approach is given at the end of this article).

The binary representation of preference information is called a preference relation. For each pair of alternatives (instances or labels), a binary predicate can be learned by a conventional supervised learning approach. Fürnkranz and Hüllermeier proposed this approach for the label ranking problem.[4] For object ranking, there is an early approach by Cohen et al.[5]

Using preference relations to predict the ranking is less straightforward. Since observed preference relations may not always be transitive due to inconsistencies in the data, finding a ranking that satisfies all the preference relations may not be possible, or there may be multiple possible solutions. A more common approach is to find a ranking solution which is maximally consistent with the preference relations. This approach is a natural extension of pairwise classification.[4]

Preference learning can be used in ranking search results according to feedback on user preferences. Given a query and a set of documents, a learning model is used to find the ranking of documents corresponding to their relevance to this query. More discussion of research in this field can be found in Tie-Yan Liu's survey paper.[6]

Another application of preference learning is recommender systems.[7] An online store may analyze customers' purchase records to learn a preference model and then recommend similar products to customers. Internet content providers can make use of users' ratings to provide more of the content that users prefer.
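To make the utility-function approach concrete (as forward-referenced above), here is a minimal sketch of object ranking with a linear utility function trained on pairwise preferences via a perceptron-style update. The update rule and all names are illustrative, not taken from the cited works.

```java
// Sketch: learning a linear utility function f(x) = <w, x> from pairwise
// preferences a ≻ b. Whenever the current w scores a pair in the wrong
// order, a perceptron-style update nudges w toward satisfying it.
// Objects can then be ranked by sorting on their utility values.
public class UtilityRanker {
    double[] w;

    UtilityRanker(int dim) { w = new double[dim]; }

    double utility(double[] x) {
        double s = 0;
        for (int k = 0; k < w.length; k++) s += w[k] * x[k];
        return s;
    }

    // One pass over preference pairs: preferred[i] ≻ other[i].
    void train(double[][] preferred, double[][] other, double rate) {
        for (int i = 0; i < preferred.length; i++)
            if (utility(preferred[i]) <= utility(other[i]))
                for (int k = 0; k < w.length; k++)
                    w[k] += rate * (preferred[i][k] - other[i][k]);
    }
}
```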
https://en.wikipedia.org/wiki/Preference_learning
Proactive learning[1]is a generalization ofactive learningdesigned to relax unrealistic assumptions and thereby reach practical applications. "In real life, it is possible and more general to have multiple sources of information with differing reliabilities or areas of expertise. Active learning also assumes that the single oracle is perfect, always providing a correct answer when requested. In reality, though, an "oracle" (if we generalize the term to mean any source of expert information) may be incorrect (fallible) with a probability that should be a function of the difficulty of the question. Moreover, an oracle may be reluctant – it may refuse to answer if it is too uncertain or too busy. Finally, active learning presumes the oracle is either free or charges uniform cost in label elicitation. Such an assumption is naive since cost is likely to be regulated by difficulty (amount of work required to formulate an answer) or other factors."[1] Proactive learning relaxes all four of these assumptions, relying on a decision-theoretic approach to jointly select the optimal oracle and instance, by casting the problem as a utilityoptimization problemsubject to abudget constraint.
https://en.wikipedia.org/wiki/Proactive_learning
In machine learning, semantic analysis of a text corpus is the task of building structures that approximate concepts from a large set of documents. It generally does not involve prior semantic understanding of the documents. Several semantic analysis strategies are in use.

This computer science article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Semantic_analysis_(machine_learning)
Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis.[1][2][3] Statistical learning theory deals with the statistical inference problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.

The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood.[4] Supervised learning involves learning from a training set of data. Every point in the training set is an input–output pair, where the input maps to an output. The learning problem consists of inferring the function that maps between the input and the output, such that the learned function can be used to predict the output from future input.

Depending on the type of output, supervised learning problems are either problems of regression or problems of classification. If the output takes a continuous range of values, it is a regression problem. Using Ohm's law as an example, a regression could be performed with voltage as input and current as an output. The regression would find the functional relationship between voltage and current to be R{\displaystyle R}, such that V=IR{\displaystyle V=IR}. Classification problems are those for which the output will be an element from a discrete set of labels. Classification is very common for machine learning applications. In facial recognition, for instance, a picture of a person's face would be the input, and the output label would be that person's name. The input would be represented by a large multidimensional vector whose elements represent pixels in the picture. After learning a function based on the training set data, that function is validated on a test set of data, data that did not appear in the training set.

Take X{\displaystyle X} to be the vector space of all possible inputs, and Y{\displaystyle Y} to be the vector space of all possible outputs. Statistical learning theory takes the perspective that there is some unknown probability distribution over the product space Z=X×Y{\displaystyle Z=X\times Y}, i.e. there exists some unknown p(z)=p(x,y){\displaystyle p(z)=p(\mathbf {x} ,y)}. The training set is made up of n{\displaystyle n} samples from this probability distribution, and is notated S={(x1,y1),…,(xn,yn)}={z1,…,zn}{\displaystyle S=\{(\mathbf {x} _{1},y_{1}),\dots ,(\mathbf {x} _{n},y_{n})\}=\{\mathbf {z} _{1},\dots ,\mathbf {z} _{n}\}}. Every xi{\displaystyle \mathbf {x} _{i}} is an input vector from the training data, and yi{\displaystyle y_{i}} is the output that corresponds to it.

In this formalism, the inference problem consists of finding a function f:X→Y{\displaystyle f:X\to Y} such that f(x)∼y{\displaystyle f(\mathbf {x} )\sim y}. Let H{\displaystyle {\mathcal {H}}} be a space of functions f:X→Y{\displaystyle f:X\to Y} called the hypothesis space. The hypothesis space is the space of functions the algorithm will search through. Let V(f(x),y){\displaystyle V(f(\mathbf {x} ),y)} be the loss function, a metric for the difference between the predicted value f(x){\displaystyle f(\mathbf {x} )} and the actual value y{\displaystyle y}.
The expected risk is defined to be I[f]=∫X×YV(f(x),y)p(x,y)dxdy{\displaystyle I[f]=\int _{X\times Y}V(f(\mathbf {x} ),y)\,p(\mathbf {x} ,y)\,d\mathbf {x} \,dy}. The target function, the best possible function f{\displaystyle f} that can be chosen, is given by the f{\displaystyle f} that satisfies f=argminh∈H⁡I[h]{\displaystyle f=\mathop {\operatorname {argmin} } _{h\in {\mathcal {H}}}I[h]}.

Because the probability distribution p(x,y){\displaystyle p(\mathbf {x} ,y)} is unknown, a proxy measure for the expected risk must be used. This measure is based on the training set, a sample from this unknown probability distribution. It is called the empirical risk: IS[f]=1n∑i=1nV(f(xi),yi){\displaystyle I_{S}[f]={\frac {1}{n}}\sum _{i=1}^{n}V(f(\mathbf {x} _{i}),y_{i})}. A learning algorithm that chooses the function fS{\displaystyle f_{S}} that minimizes the empirical risk is called empirical risk minimization (a numeric sketch of evaluating the empirical risk is given below).

The choice of loss function is a determining factor in the function fS{\displaystyle f_{S}} that will be chosen by the learning algorithm. The loss function also affects the convergence rate for an algorithm. It is important for the loss function to be convex.[5] Different loss functions are used depending on whether the problem is one of regression or one of classification.

The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in ordinary least squares regression. The form is: V(f(x),y)=(y−f(x))2{\displaystyle V(f(\mathbf {x} ),y)=(y-f(\mathbf {x} ))^{2}} The absolute value loss (also known as the L1-norm) is also sometimes used: V(f(x),y)=|y−f(x)|{\displaystyle V(f(\mathbf {x} ),y)=|y-f(\mathbf {x} )|}

In some sense the 0-1 indicator function is the most natural loss function for classification. It takes the value 0 if the predicted output is the same as the actual output, and it takes the value 1 if the predicted output is different from the actual output. For binary classification with Y={−1,1}{\displaystyle Y=\{-1,1\}}, this is: V(f(x),y)=θ(−yf(x)){\displaystyle V(f(\mathbf {x} ),y)=\theta (-yf(\mathbf {x} ))} where θ{\displaystyle \theta } is the Heaviside step function.

In machine learning problems, a major problem that arises is that of overfitting. Because learning is a prediction problem, the goal is not to find a function that most closely fits the (previously observed) data, but to find one that will most accurately predict output from future input. Empirical risk minimization runs this risk of overfitting: finding a function that matches the data exactly but does not predict future output well. Overfitting is symptomatic of unstable solutions; a small perturbation in the training set data would cause a large variation in the learned function. It can be shown that if the stability for the solution can be guaranteed, generalization and consistency are guaranteed as well.[6][7] Regularization can solve the overfitting problem and give the problem stability.

Regularization can be accomplished by restricting the hypothesis space H{\displaystyle {\mathcal {H}}}. A common example would be restricting H{\displaystyle {\mathcal {H}}} to linear functions: this can be seen as a reduction to the standard problem of linear regression. H{\displaystyle {\mathcal {H}}} could also be restricted to polynomials of degree p{\displaystyle p}, exponentials, or bounded functions on L1.
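To make the empirical risk defined above concrete, a small numeric sketch (not from the cited literature): it evaluates I_S[f] with the square loss for two hypotheses drawn from a restricted, linear hypothesis space.

```java
import java.util.function.DoubleUnaryOperator;

// Sketch: empirical risk I_S[f] = (1/n) * sum V(f(x_i), y_i) for a
// one-dimensional hypothesis f, shown with the square loss; using the
// absolute loss instead would only change the V term.
public class EmpiricalRisk {
    static double squareLoss(double pred, double y) {
        double d = y - pred;
        return d * d;
    }

    static double risk(DoubleUnaryOperator f, double[] x, double[] y) {
        double total = 0;
        for (int i = 0; i < x.length; i++)
            total += squareLoss(f.applyAsDouble(x[i]), y[i]);
        return total / x.length;
    }

    public static void main(String[] args) {
        double[] x = {0, 1, 2, 3};
        double[] y = {0.1, 0.9, 2.2, 2.8};
        // Compare two hypotheses from a restricted linear hypothesis space.
        System.out.println(risk(v -> v, x, y));         // f(x) = x
        System.out.println(risk(v -> 0.9 * v, x, y));   // f(x) = 0.9x
    }
}
```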
Restriction of the hypothesis space avoids overfitting because the form of the potential functions is limited, and so it does not allow for the choice of a function that gives an empirical risk arbitrarily close to zero.

One example of regularization is Tikhonov regularization. This consists of minimizing 1n∑i=1nV(f(xi),yi)+γ‖f‖H2{\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}V(f(\mathbf {x} _{i}),y_{i})+\gamma \left\|f\right\|_{\mathcal {H}}^{2}} where γ{\displaystyle \gamma } is a fixed and positive parameter, the regularization parameter. Tikhonov regularization ensures existence, uniqueness, and stability of the solution.[8]

Consider a binary classifier f:X→{0,1}{\displaystyle f:{\mathcal {X}}\to \{0,1\}}. Because the difference between the empirical risk and the true risk is sub-Gaussian, we can apply Hoeffding's inequality to bound the probability that the empirical risk deviates from the true risk: P(|R^(f)−R(f)|≥ϵ)≤2e−2nϵ2{\displaystyle \mathbb {P} (|{\hat {R}}(f)-R(f)|\geq \epsilon )\leq 2e^{-2n\epsilon ^{2}}}. But generally, when we do empirical risk minimization, we are not given a classifier; we must choose it. Therefore, a more useful result is to bound the probability of the supremum of the difference over the whole class: P(supf∈F|R^(f)−R(f)|≥ϵ)≤2S(F,n)e−nϵ2/8≈nde−nϵ2/8{\displaystyle \mathbb {P} {\bigg (}\sup _{f\in {\mathcal {F}}}|{\hat {R}}(f)-R(f)|\geq \epsilon {\bigg )}\leq 2S({\mathcal {F}},n)e^{-n\epsilon ^{2}/8}\approx n^{d}e^{-n\epsilon ^{2}/8}} where S(F,n){\displaystyle S({\mathcal {F}},n)} is the shattering number and n{\displaystyle n} is the number of samples in the dataset. The exponential term comes from Hoeffding, but taking the supremum over the whole class incurs an extra cost, the shattering number.
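The single-classifier bound above can be read as a sample-size requirement: setting the right-hand side equal to a confidence budget δ and solving for n. A small sketch under that reading:

```java
// Sketch: inverting the single-classifier Hoeffding bound
// P(|R_hat - R| >= eps) <= 2 exp(-2 n eps^2). Setting the right-hand
// side to delta and solving for n gives n >= ln(2/delta) / (2 eps^2).
public class HoeffdingSampleSize {
    static long samplesNeeded(double eps, double delta) {
        return (long) Math.ceil(Math.log(2.0 / delta) / (2.0 * eps * eps));
    }

    public static void main(String[] args) {
        // e.g., deviation at most 0.05 with probability at least 0.95
        System.out.println(samplesNeeded(0.05, 0.05)); // prints 738
    }
}
```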
https://en.wikipedia.org/wiki/Statistical_learning_theory
Tanagra is a free suite of machine learning software for research and academic purposes developed by Ricco Rakotomalala at the Lumière University Lyon 2, France.[1][2] Tanagra supports several standard data mining tasks such as: visualization, descriptive statistics, instance selection, feature selection, feature construction, regression, factor analysis, clustering, classification and association rule learning.

Tanagra is an academic project. It is widely used in French-speaking universities.[3] Tanagra is frequently used in real studies[4] and in software comparison papers.

The development of Tanagra was started in June 2003. The first version was distributed in December 2003. Tanagra is the successor of Sipina, another free data mining tool which is intended only for supervised learning tasks (classification), especially the interactive and visual construction of decision trees. Sipina is still available online and is maintained. Tanagra is an "open source project" in that every researcher can access the source code and add their own algorithms, as long as they agree and conform to the software distribution license. The main purpose of the Tanagra project is to give researchers and students user-friendly data mining software, conforming to the present norms of software development in this domain (especially in the design of its GUI and the way to use it), and allowing the analysis of either real or synthetic data.

From 2006, Ricco Rakotomalala made an important documentation effort. A large number of tutorials are published on a dedicated website. They describe the statistical and machine learning methods and their implementation with Tanagra on real case studies. The use of other free data mining tools on the same problems is also widely described. The comparison of the tools enables readers to understand the possible differences in the presentation of results.

Tanagra works similarly to current data mining tools. The user can visually design a data mining process in a diagram. Each node is a statistical or machine learning technique, and the connection between two nodes represents the data transfer. But unlike the majority of tools, which are based on the workflow paradigm, Tanagra is very simplified. The treatments are represented in a tree diagram. The results are displayed in an HTML format. This makes it easy to export the outputs in order to visualize the results in a browser. It is also possible to copy the result tables to a spreadsheet. Tanagra makes a good compromise between statistical approaches (e.g. parametric and nonparametric statistical tests), multivariate analysis methods (e.g. factor analysis, correspondence analysis, cluster analysis, regression) and machine learning techniques (e.g. neural networks, support vector machines, decision trees, random forests).
https://en.wikipedia.org/wiki/Tanagra_(machine_learning)
Version space learning is a logical approach to machine learning, specifically binary classification. Version space learning algorithms search a predefined space of hypotheses, viewed as a set of logical sentences. Formally, the hypothesis space is a disjunction h1∨h2∨⋯∨hn{\displaystyle h_{1}\lor h_{2}\lor \cdots \lor h_{n}}[1] (i.e., one or more of hypotheses 1 through n are true). A version space learning algorithm is presented with examples, which it will use to restrict its hypothesis space; for each example x, the hypotheses that are inconsistent with x are removed from the space.[2] This iterative refining of the hypothesis space is called the candidate elimination algorithm; the hypothesis space maintained inside the algorithm is its version space.[1]

In settings where there is a generality-ordering on hypotheses, it is possible to represent the version space by two sets of hypotheses: (1) the most specific consistent hypotheses, and (2) the most general consistent hypotheses, where "consistent" indicates agreement with observed data.

The most specific hypotheses (i.e., the specific boundary SB) cover the observed positive training examples, and as little of the remaining feature space as possible. These hypotheses, if reduced any further, exclude a positive training example, and hence become inconsistent. These minimal hypotheses essentially constitute a (pessimistic) claim that the true concept is defined just by the positive data already observed: thus, if a novel (never-before-seen) data point is observed, it should be assumed to be negative. (I.e., if data has not previously been ruled in, then it's ruled out.)

The most general hypotheses (i.e., the general boundary GB) cover the observed positive training examples, but also cover as much of the remaining feature space as possible without including any negative training examples. These, if enlarged any further, include a negative training example, and hence become inconsistent. These maximal hypotheses essentially constitute an (optimistic) claim that the true concept is defined just by the negative data already observed: thus, if a novel (never-before-seen) data point is observed, it should be assumed to be positive. (I.e., if data has not previously been ruled out, then it's ruled in.)

Thus, during learning, the version space (which itself is a set – possibly infinite – containing all consistent hypotheses) can be represented by just its lower and upper bounds (maximally general and maximally specific hypothesis sets), and learning operations can be performed just on these representative sets.

After learning, classification can be performed on unseen examples by testing the hypothesis learned by the algorithm. If the example is consistent with multiple hypotheses, a majority vote rule can be applied.[1]

The notion of version spaces was introduced by Mitchell in the early 1980s[2] as a framework for understanding the basic problem of supervised learning within the context of solution search. Although the basic "candidate elimination" search method that accompanies the version space framework is not a popular learning algorithm, there are some practical implementations that have been developed (e.g., Sverdlik & Reynolds 1992, Hong & Tsang 1997, Dubois & Quafafou 2002).
A major drawback of version space learning is its inability to deal with noise: any pair of inconsistent examples can cause the version space to collapse, i.e., become empty, so that classification becomes impossible.[1] One solution to this problem, proposed by Dubois and Quafafou, is the Rough Version Space,[3] in which rough-set-based approximations are used to learn certain and possible hypotheses in the presence of inconsistent data.
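A minimal sketch of the removal step described above, over an explicitly enumerated hypothesis space. This brute-force representation is tractable only for tiny spaces, which is exactly what the boundary-set (S and G) representation avoids; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch: version space learning over an explicitly enumerated (tiny)
// hypothesis space. Each labeled example removes the hypotheses that
// disagree with it; the boundary sets S and G exist precisely to avoid
// this brute-force enumeration in realistic spaces.
public class VersionSpace<X> {
    final List<Predicate<X>> hypotheses;

    VersionSpace(List<Predicate<X>> initial) {
        hypotheses = new ArrayList<>(initial);
    }

    // Candidate elimination step: drop inconsistent hypotheses.
    void observe(X example, boolean label) {
        hypotheses.removeIf(h -> h.test(example) != label);
    }

    // Majority vote over the remaining consistent hypotheses.
    boolean classify(X example) {
        long positive = hypotheses.stream().filter(h -> h.test(example)).count();
        return positive * 2 > hypotheses.size();
    }
}
```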
https://en.wikipedia.org/wiki/Version_space_learning
Wafflesis a collection of command-line tools for performingmachine learningoperations developed atBrigham Young University. These tools are written inC++, and are available under theGNU Lesser General Public License. The Waffles machine learning toolkit[1]contains command-line tools for performing various operations related tomachine learning,data mining, andpredictive modeling. The primary focus of Waffles is to provide tools that are simple to use in scripted experiments or processes. For example, the supervised learning algorithms included in Waffles are all designed to support multi-dimensional labels,classificationandregression, automatically impute missing values, and automatically apply necessary filters to transform the data to a type that the algorithm can support, such that arbitrary learning algorithms can be used with arbitrary data sets. Many other machine learning toolkits provide similar functionality, but require the user to explicitly configure data filters and transformations to make it compatible with a particular learning algorithm. The algorithms provided in Waffles also have the ability to automatically tune their own parameters (with the cost of additional computational overhead). Because Waffles is designed for script-ability, it deliberately avoids presenting its tools in a graphical environment. It does, however, include a graphical "wizard" tool that guides the user to generate a command that will perform a desired task. This wizard does not actually perform the operation, but requires the user to paste the command that it generates into a command terminal or a script. The idea motivating this design is to prevent the user from becoming "locked in" to a graphical interface. All of the Waffles tools are implemented as thin wrappers around functionality in a C++ class library. This makes it possible to convert scripted processes into native applications with minimal effort. Waffles was first released as an open source project in 2005. Since that time, it has been developed atBrigham Young University, with a new version having been released approximately every 6–9 months. Waffles is not an acronym—the toolkit was named after the food for historical reasons. Some of the advantages of Waffles in contrast with other popular open source machine learning toolkits include:
https://en.wikipedia.org/wiki/Waffles_(machine_learning)
Waikato Environment for Knowledge Analysis (Weka) is a collection of machine learning and data analysis free software licensed under the GNU General Public License. It was developed at the University of Waikato, New Zealand, and is the companion software to the book "Data Mining: Practical Machine Learning Tools and Techniques".[1]

Weka contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to these functions.[1] The original non-Java version of Weka was a Tcl/Tk front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains,[2][3] but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research.

Among its advantages, Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection. Input to Weka is expected to be formatted according to the Attribute-Relation File Format (ARFF), with the filename bearing the .arff extension (see the example below). All of Weka's techniques are predicated on the assumption that the data is available as one flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported).

Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. Weka provides access to deep learning with Deeplearning4j.[4] It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for processing using Weka.[5] Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling.

In version 3.7.2, a package manager was added to allow the easier installation of extension packages.[6] Some functionality that used to be included with Weka prior to this version has since been moved into such extension packages, but this change also makes it easier for others to contribute extensions to Weka and to maintain the software, as this modular architecture allows independent updates of the Weka core and individual extensions.
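As an illustration of the input format referenced above, a minimal ARFF file in the style of the toy weather data distributed with Weka might look like the following; the attribute choices and values here are illustrative.

```
% Minimal illustrative ARFF file: a relation name, typed attribute
% declarations, then comma-separated data rows.
@relation weather

@attribute outlook {sunny, overcast, rainy}
@attribute temperature numeric
@attribute humidity numeric
@attribute play {yes, no}

@data
sunny, 85, 85, no
overcast, 83, 86, yes
rainy, 70, 96, yes
```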
https://en.wikipedia.org/wiki/Weka_(machine_learning)
The Taguchi loss function is a graphical depiction of loss developed by the Japanese business statistician Genichi Taguchi to describe a phenomenon affecting the value of products produced by a company. Praised by Dr. W. Edwards Deming (the business guru of the 1980s American quality movement),[1] it made clear the concept that quality does not suddenly plummet when, for instance, a machinist exceeds a rigid blueprint tolerance. Instead, 'loss' in value progressively increases as variation increases from the intended condition. This was considered a breakthrough in describing quality, and helped fuel the continuous improvement movement.

The concept of Taguchi's quality loss function was in contrast with the American concept of quality, popularly known as the goal post philosophy, the concept given by American quality guru Phil Crosby. The goal post philosophy emphasizes that if a product feature doesn't meet the designed specifications it is termed a product of poor quality (rejected), irrespective of the amount of deviation from the target value (the mean value of the tolerance zone). This concept has similarity with the concept of scoring a 'goal' in the game of football or hockey, because a goal is counted 'one' irrespective of the location of strike of the ball in the 'goal post', whether it is in the center or towards the corner. This means that if the product dimension goes out of the tolerance limit the quality of the product drops suddenly.

Through his concept of the quality loss function, Taguchi explained that from the customer's point of view this drop of quality is not sudden. The customer experiences a loss of quality the moment the product specification deviates from the 'target value'. This 'loss' is depicted by a quality loss function and it follows a parabolic curve mathematically given by L = k(y–m)2, where m is the theoretical 'target value' or 'mean value' and y is the actual size of the product, k is a constant and L is the loss. This means that if the difference between 'actual size' and 'target value', i.e. (y–m), is large, the loss is greater, irrespective of the tolerance specifications. In Taguchi's view, tolerance specifications are given by engineers and not by customers; what the customer experiences is 'loss'. This equation is true for a single product; if 'loss' is to be calculated for multiple products, the loss function is given by L = k[S2 + (y¯{\displaystyle {\bar {y}}} – m)2], where S2 is the 'variance of product size' and y¯{\displaystyle {\bar {y}}} is the average product size.

The Taguchi loss function is important for a number of reasons—primarily, to help engineers better understand the importance of designing for variation. Taguchi also focused on the robust design of models.
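A small worked sketch of both forms of the loss function given above; the constant k, the target m, and the measurements are illustrative.

```java
// Sketch: Taguchi loss L = k (y - m)^2 for a single item, and the batch
// form L = k [ S^2 + (ybar - m)^2 ]. The constant k and the data are
// illustrative.
public class TaguchiLoss {
    public static void main(String[] args) {
        double k = 50.0, m = 10.0;              // target value m
        double[] y = {10.2, 9.8, 10.5, 9.9};    // measured sizes

        double single = k * Math.pow(y[0] - m, 2);  // loss for one item

        double mean = 0;
        for (double v : y) mean += v;
        mean /= y.length;
        double var = 0;                          // S^2, variance of size
        for (double v : y) var += (v - mean) * (v - mean);
        var /= y.length;

        double batch = k * (var + (mean - m) * (mean - m));
        System.out.printf("single = %.3f, batch = %.3f%n", single, batch);
    }
}
```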
https://en.wikipedia.org/wiki/Taguchi_loss_function
Low-energy adaptive clustering hierarchy ("LEACH")[1] is a TDMA-based MAC protocol which is integrated with clustering and a simple routing protocol in wireless sensor networks (WSNs). The goal of LEACH is to lower the energy consumption required to create and maintain clusters in order to improve the lifetime of a wireless sensor network.

LEACH is a hierarchical protocol in which most nodes transmit to cluster heads, and the cluster heads aggregate and compress the data and forward it to the base station (sink). Each node uses a stochastic algorithm at each round to determine whether it will become a cluster head in this round. LEACH assumes that each node has a radio powerful enough to directly reach the base station or the nearest cluster head, but that using this radio at full power all the time would waste energy. Nodes that have been cluster heads cannot become cluster heads again for 1/P rounds, where P is the desired fraction of cluster heads; thereafter, each node again becomes eligible, with an election probability chosen so that on average a fraction P of the nodes serve as cluster heads in each round.

At the end of each round, each node that is not a cluster head selects the closest cluster head and joins that cluster. The cluster head then creates a schedule for each node in its cluster to transmit its data. All nodes that are not cluster heads only communicate with the cluster head in a TDMA fashion, according to the schedule created by the cluster head. They do so using the minimum energy needed to reach the cluster head, and only need to keep their radios on during their time slot. LEACH also uses CDMA so that each cluster uses a different set of CDMA codes, to minimize interference between clusters. The properties and shortcomings of the protocol have been analyzed in the literature.[2]
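The commonly cited formulation of LEACH's stochastic election uses the threshold T(n) = P / (1 − P·(r mod 1/P)), applied only to nodes that have not served as cluster head in the current 1/P-round cycle. The following is a sketch under that formulation, not code from the protocol's references.

```java
import java.util.Random;

// Sketch of LEACH's stochastic cluster-head election using the commonly
// cited threshold T(n) = P / (1 - P * (r mod 1/P)). Only nodes that
// have not served as cluster head in the current cycle are eligible;
// the threshold rises to 1 as the cycle ends, so every node serves once.
public class LeachElection {
    static boolean becomesClusterHead(double P, int round,
                                      boolean servedThisCycle, Random rnd) {
        if (servedThisCycle) return false;   // ineligible until cycle ends
        double threshold = P / (1.0 - P * (round % Math.round(1.0 / P)));
        return rnd.nextDouble() < threshold;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        double P = 0.05;                     // desired fraction of heads
        int heads = 0, nodes = 1000;
        for (int n = 0; n < nodes; n++)
            if (becomesClusterHead(P, 0, false, rnd)) heads++;
        System.out.println(heads + " cluster heads elected in round 0");
    }
}
```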
https://en.wikipedia.org/wiki/Low-energy_adaptive_clustering_hierarchy
Aneural processing unit(NPU), also known asAI acceleratorordeep learning processor,is a class of specializedhardware accelerator[1]or computer system[2][3]designed to accelerateartificial intelligence(AI) andmachine learningapplications, includingartificial neural networksandcomputer vision. Their purpose is either to efficiently execute already trained AI models (inference) or to train AI models. Their applications includealgorithmsforrobotics,Internet of things, anddata-intensive or sensor-driven tasks.[4]They are oftenmanycoredesigns and focus onlow-precisionarithmetic, noveldataflow architectures, orin-memory computingcapability. As of 2024[update], a typical AIintegrated circuitchipcontains tens of billionsofMOSFETs.[5] AI accelerators are used in mobile devices such as AppleiPhonesandHuaweicellphones,[6]and personal computers such asIntellaptops,[7]AMDlaptops[8]andApple siliconMacs.[9]Accelerators are used incloud computingservers, includingtensor processing units(TPU) inGoogle Cloud Platform[10]andTrainiumandInferentiachips inAmazon Web Services.[11]Many vendor-specific terms exist for devices in this category, and it is anemerging technologywithout adominant design. Graphics processing unitsdesigned by companies such asNvidiaandAMDoften include AI-specific hardware, and are commonly used as AI accelerators, both fortrainingandinference.[12]All models of IntelMeteor Lakeprocessors have a built-inversatile processor unit(VPU) for acceleratinginferencefor computer vision and deep learning.[13] This computing article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/AI_accelerator
A physical neural network is a type of artificial neural network in which an electrically adjustable material is used to emulate the function of a neural synapse or a higher-order (dendritic) neuron model.[1] "Physical" neural network is used to emphasize the reliance on physical hardware used to emulate neurons, as opposed to software-based approaches. More generally the term is applicable to other artificial neural networks in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse.[2][3]

In the 1960s Bernard Widrow and Ted Hoff developed ADALINE (Adaptive Linear Neuron), which used electrochemical cells called memistors (memory resistors) to emulate synapses of an artificial neuron.[4] The memistors were implemented as 3-terminal devices operating based on the reversible electroplating of copper such that the resistance between two of the terminals is controlled by the integral of the current applied via the third terminal. The ADALINE circuitry was briefly commercialized by the Memistor Corporation in the 1960s, enabling some applications in pattern recognition. However, since the memistors were not fabricated using integrated-circuit fabrication techniques, the technology was not scalable and was eventually abandoned as solid-state electronics became mature.[5]

In 1989 Carver Mead published his book Analog VLSI and Neural Systems,[6] which spun off perhaps the most common variant of analog neural networks. The physical realization is implemented in analog VLSI. This is often implemented as field-effect transistors operated in weak inversion. Such devices can be modelled as translinear circuits. This is a technique described by Barrie Gilbert in several papers around the mid-1970s, and in particular his Translinear Circuits from 1981.[7][8] With this method circuits can be analyzed as a set of well-defined functions in steady state, and such circuits assembled into complex networks.

Alex Nugent describes a physical neural network as one or more nonlinear neuron-like nodes used to sum signals, and nanoconnections formed from nanoparticles, nanowires, or nanotubes which determine the signal strength input to the nodes.[9] Alignment or self-assembly of the nanoconnections is determined by the history of the applied electric field, performing a function analogous to neural synapses. Numerous applications[10] for such physical neural networks are possible. For example, a temporal summation device[11] can be composed of one or more nanoconnections having an input and an output thereof, wherein an input signal provided to the input causes one or more of the nanoconnections to experience an increase in connection strength thereof over time. Another example of a physical neural network is taught by U.S. Patent No. 7,039,619[12] entitled "Utilized nanotechnology apparatus using a neural network, a solution and a connection gap," which issued to Alex Nugent by the U.S. Patent & Trademark Office on May 2, 2006.[13] A further application of physical neural networks is shown in U.S. Patent No.
7,412,428, entitled "Application of hebbian and anti-hebbian learning to nanotechnology-based physical neural networks," which issued on August 12, 2008.[14]

Nugent and Molter have shown that universal computing and general-purpose machine learning are possible from operations available through simple memristive circuits operating under the AHaH plasticity rule.[15] More recently, it has been argued that complex networks of purely memristive circuits can also serve as neural networks.[16][17]

In 2002, Stanford Ovshinsky described an analog neural computing medium in which phase-change material has the ability to cumulatively respond to multiple input signals.[18] An electrical alteration of the resistance of the phase-change material is used to control the weighting of the input signals.

Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices.[19] The memristors (memory resistors) are implemented by thin-film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures which may be based on memristive systems.[20]

In 2022, researchers reported the development of nanoscale brain-inspired artificial synapses, using the proton (H+) as the ion, for 'analog deep learning'.[21][22]
https://en.wikipedia.org/wiki/Physical_neural_network
Data miningis the process of extracting and finding patterns in massivedata setsinvolving methods at the intersection ofmachine learning,statistics, anddatabase systems.[1]Data mining is aninterdisciplinarysubfield ofcomputer scienceandstatisticswith an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use.[1][2][3][4]Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.[5]Aside from the raw analysis step, it also involves database anddata managementaspects,data pre-processing,modelandinferenceconsiderations, interestingness metrics,complexityconsiderations, post-processing of discovered structures,visualization, andonline updating.[1] The term "data mining" is amisnomerbecause the goal is the extraction ofpatternsand knowledge from large amounts of data, not theextraction (mining) of data itself.[6]It also is abuzzword[7]and is frequently applied to any form of large-scale data orinformation processing(collection,extraction,warehousing, analysis, and statistics) as well as any application ofcomputer decision support systems, includingartificial intelligence(e.g., machine learning) andbusiness intelligence. Often the more general terms (large scale)data analysisandanalytics—or, when referring to actual methods,artificial intelligenceandmachine learning—are more appropriate. The actual data mining task is the semi-automaticor automatic analysis of massive quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), anddependencies(association rule mining,sequential pattern mining). This usually involves using database techniques such asspatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning andpredictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by adecision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, although they do belong to the overall KDD process as additional steps. The difference betweendata analysisand data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of amarketing campaign, regardless of the amount of data. In contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data.[8] The related termsdata dredging,data fishing, anddata snoopingrefer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations. In the 1960s, statisticians and economists used terms likedata fishingordata dredgingto refer to what they considered the bad practice of analyzing data without ana-priorihypothesis. 
The term "data mining" was used in a similarly critical way by economistMichael Lovellin an article published in theReview of Economic Studiesin 1983.[9][10]Lovell indicates that the practice "masquerades under a variety of aliases, ranging from "experimentation" (positive) to "fishing" or "snooping" (negative). The termdata miningappeared around 1990 in the database community, with generally positive connotations. For a short time in 1980s, the phrase "database mining"™, was used, but since it was trademarked by HNC, aSan Diego–based company, to pitch their Database Mining Workstation;[11]researchers consequently turned todata mining. Other terms used includedata archaeology,information harvesting,information discovery,knowledge extraction, etc.Gregory Piatetsky-Shapirocoined the term "knowledge discovery in databases" for the first workshop on the same topic(KDD-1989)and this term became more popular in theAIandmachine learningcommunities. However, the term data mining became more popular in the business and press communities.[12]Currently, the termsdata miningandknowledge discoveryare used interchangeably. The manual extraction of patterns fromdatahas occurred for centuries. Early methods of identifying patterns in data includeBayes' theorem(1700s) andregression analysis(1800s).[13]The proliferation, ubiquity and increasing power of computer technology have dramatically increased data collection, storage, and manipulation ability. Asdata setshave grown in size and complexity, direct "hands-on" data analysis has increasingly been augmented with indirect, automated data processing, aided by other discoveries in computer science, specially in the field of machine learning, such asneural networks,cluster analysis,genetic algorithms(1950s),decision treesanddecision rules(1960s), andsupport vector machines(1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns.[14]in large data sets. It bridges the gap fromapplied statisticsand artificial intelligence (which usually provide the mathematical background) todatabase managementby exploiting the way data is stored and indexed in databases to execute the actual learning and discovery algorithms more efficiently, allowing such methods to be applied to ever-larger data sets. Theknowledge discovery in databases (KDD) processis commonly defined with the stages: It exists, however, in many variations on this theme, such as theCross-industry standard process for data mining(CRISP-DM) which defines six phases: or a simplified process such as (1) Pre-processing, (2) Data Mining, and (3) Results Validation. Polls conducted in 2002, 2004, 2007 and 2014 show that the CRISP-DM methodology is the leading methodology used by data miners.[15][16][17][18] The only other data mining standard named in these polls wasSEMMA. However, 3–4 times as many people reported using CRISP-DM. Several teams of researchers have published reviews of data mining process models,[19]and Azevedo and Santos conducted a comparison of CRISP-DM and SEMMA in 2008.[20] Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns actually present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined within an acceptable time limit. A common source for data is adata martordata warehouse. Pre-processing is essential to analyze themultivariatedata sets before data mining. The target set is then cleaned. 
Data cleaning removes the observations containing noise and those with missing data.

Data mining involves six common classes of tasks:[5] anomaly detection, association rule learning, clustering, classification, regression, and summarization.

Data mining can unintentionally be misused, producing results that appear to be significant but which do not actually predict future behavior and cannot be reproduced on a new sample of data, therefore bearing little use. This is sometimes caused by investigating too many hypotheses and not performing proper statistical hypothesis testing. A simple version of this problem in machine learning is known as overfitting, but the same problem can arise at different phases of the process and thus a train/test split—when applicable at all—may not be sufficient to prevent this from happening.[21]

The final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set. Not all patterns found by the algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. This is called overfitting. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained. The learned patterns are applied to this test set, and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish "spam" from "legitimate" e-mails would be trained on a training set of sample e-mails. Once trained, the learned patterns would be applied to the test set of e-mails on which it had not been trained. The accuracy of the patterns can then be measured from how many e-mails they correctly classify (see the sketch below). Several statistical methods may be used to evaluate the algorithm, such as ROC curves. If the learned patterns do not meet the desired standards, it is necessary to re-evaluate and change the pre-processing and data mining steps. If the learned patterns do meet the desired standards, then the final step is to interpret the learned patterns and turn them into knowledge.

The premier professional body in the field is the Association for Computing Machinery's (ACM) Special Interest Group (SIG) on Knowledge Discovery and Data Mining (SIGKDD).[22][23] Since 1989, this ACM SIG has hosted an annual international conference and published its proceedings,[24] and since 1999 it has published a biannual academic journal titled "SIGKDD Explorations".[25] Besides dedicated computer science conferences on data mining, data mining topics are also present in many data management/database conferences such as the ICDE Conference, SIGMOD Conference and International Conference on Very Large Data Bases.

There have been some efforts to define standards for the data mining process, for example, the 1999 European Cross Industry Standard Process for Data Mining (CRISP-DM 1.0) and the 2004 Java Data Mining standard (JDM 1.0). Development on successors to these processes (CRISP-DM 2.0 and JDM 2.0) was active in 2006 but has stalled since. JDM 2.0 was withdrawn without reaching a final draft. For exchanging the extracted models—in particular for use in predictive analytics—the key standard is the Predictive Model Markup Language (PMML), which is an XML-based language developed by the Data Mining Group (DMG) and supported as an exchange format by many data mining applications. As the name suggests, it only covers prediction models, a particular data mining task of high importance to business applications.
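The test-set evaluation described above (see the forward reference) reduces to counting correct classifications on data held out from training. A minimal sketch with a stand-in classifier; the rule, data, and names are illustrative.

```java
import java.util.function.Predicate;

// Sketch: measuring learned patterns on a held-out test set, as in the
// spam example above. The classifier here is a toy stand-in rule.
public class HoldoutEvaluation {
    static <T> double accuracy(Predicate<T> classifier, T[] items,
                               boolean[] labels) {
        int correct = 0;
        for (int i = 0; i < items.length; i++)
            if (classifier.test(items[i]) == labels[i]) correct++;
        return (double) correct / items.length;
    }

    public static void main(String[] args) {
        // Toy "spam" rule, applied to mail it was not trained on.
        Predicate<String> spamRule = s -> s.toLowerCase().contains("winner");
        String[] testMail = {"You are a WINNER", "Meeting at 10", "winner prize"};
        boolean[] isSpam = {true, false, true};
        System.out.println(accuracy(spamRule, testMail, isSpam)); // 1.0
    }
}
```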
However, extensions to cover (for example) subspace clustering have been proposed independently of the DMG.[26]

Data mining is used wherever there is digital data available. Notable examples of data mining can be found throughout business, medicine, science, finance, construction, and surveillance.

While the term "data mining" itself may have no ethical implications, it is often associated with the mining of information in relation to user behavior (ethical and otherwise).[27] The ways in which data mining can be used can in some cases and contexts raise questions regarding privacy, legality, and ethics.[28] In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns.[29][30]

Data mining requires data preparation which uncovers information or patterns which compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation involves combining data together (possibly from various sources) in a way that facilitates analysis (but that also might make identification of private, individual-level data deducible or otherwise apparent).[31] This is not data mining per se, but a result of the preparation of data before—and for the purposes of—the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when the data were originally anonymous.[32] It is recommended[according to whom?] to be aware of these considerations before data are collected.[31]

Data may also be modified so as to become anonymous, so that individuals may not readily be identified.[31] However, even "anonymized" data sets can potentially contain enough information to allow identification of individuals, as occurred when journalists were able to find several individuals based on a set of search histories that were inadvertently released by AOL.[33]

The inadvertent revelation of personally identifiable information by the provider violates Fair Information Practices. This indiscretion can cause financial, emotional, or bodily harm to the indicated individual. In one instance of privacy violation, the patrons of Walgreens filed a lawsuit against the company in 2011 for selling prescription information to data mining companies who in turn provided the data to pharmaceutical companies.[34]

Europe has rather strong privacy laws, and efforts are underway to further strengthen the rights of the consumers. However, the U.S.–E.U. Safe Harbor Principles, developed between 1998 and 2000, currently effectively expose European users to privacy exploitation by U.S. companies. As a consequence of Edward Snowden's global surveillance disclosure, there has been increased discussion to revoke this agreement, as in particular the data will be fully exposed to the National Security Agency, and attempts to reach an agreement with the United States have failed.[35]

In the United Kingdom in particular there have been cases of corporations using data mining as a way to target certain groups of customers, forcing them to pay unfairly high prices.
In the United States, privacy concerns have been addressed by the US Congress via the passage of regulatory controls such as the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires individuals to give their "informed consent" regarding information they provide and its intended present and future uses. According to an article in Biotech Business Week, "'[i]n practice, HIPAA may not offer any greater protection than the longstanding regulations in the research arena,' says the AAHC. More importantly, the rule's goal of protection through informed consent is approaching a level of incomprehensibility to average individuals."[37] This underscores the necessity for data anonymity in data aggregation and mining practices. U.S. information privacy legislation such as HIPAA and the Family Educational Rights and Privacy Act (FERPA) applies only to the specific areas that each such law addresses. The use of data mining by the majority of businesses in the U.S. is not controlled by any legislation. Under European copyright database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is not legal. Where a database is pure data in Europe, it may be that there is no copyright, but database rights may exist, so data mining becomes subject to intellectual property owners' rights that are protected by the Database Directive. On the recommendation of the Hargreaves review, the UK government amended its copyright law in 2014 to allow content mining as a limitation and exception.[38] The UK was the second country in the world to do so, after Japan, which introduced an exception in 2009 for data mining. However, due to the restrictions of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law also does not allow this provision to be overridden by contractual terms and conditions. Since 2020, Switzerland has also been regulating data mining by allowing it in the research field under certain conditions laid down by art. 24d of the Swiss Copyright Act; this new article entered into force on 1 April 2020.[39] The European Commission facilitated stakeholder discussion on text and data mining in 2013 under the title of Licences for Europe.[40] The focus on licensing, rather than limitations and exceptions, as the solution to this legal issue led representatives of universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013.[41] US copyright law, and in particular its provision for fair use, upholds the legality of content mining in America and in other fair use countries such as Israel, Taiwan and South Korea. As content mining is transformative, that is, it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of the Google Book settlement, the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed, one being text and data mining.[42] The following applications are available under free/open-source licenses; public access to application source code is also available. The following applications are available under proprietary licenses.
For more information about extracting information out of data (as opposed to analyzing data), see:
https://en.wikipedia.org/wiki/Data_Mining
Associationism is the idea that mental processes operate by the association of one mental state with its successor states.[1] It holds that all mental processes are made up of discrete psychological elements and their combinations, which are believed to be made up of sensations or simple feelings.[2] In philosophy, this idea is viewed as the outcome of empiricism and sensationism.[3] The concept encompasses a psychological theory as well as a comprehensive philosophical foundation and scientific methodology.[2] The idea is first recorded in Plato and Aristotle, especially with regard to the succession of memories. In particular, the model is traced back to the Aristotelian notion that human memory encompasses all mental phenomena. The model was discussed in detail in the philosopher's work Memory and Reminiscence.[4] This view was widely embraced until the emergence of British associationism, which began with Thomas Hobbes.[4] Members of the Associationist School, including John Locke, David Hume, David Hartley, Joseph Priestley, James Mill, John Stuart Mill, Alexander Bain, and Ivan Pavlov, asserted that the principle applied to all or most mental processes.[5] The phrase "association of ideas" was first used by John Locke in 1689. In chapter 33 of An Essay Concerning Human Understanding, which is entitled "Of the Association of Ideas", he describes the ways that ideas can be connected to each other.[6] He writes, "Some of our ideas have a natural correspondence and connection with one another."[7] Although he believed that some associations were natural and justified, he believed that others were illogical, causing errors in judgment. He also explains that one can associate some ideas together based on education and culture, saying, "there is another connection of ideas wholly owing to chance or custom".[6][7] The term associationism later became more prominent in psychology, and the psychologists who subscribed to the idea became known as "the associationists".[6] Locke's view that the mind and body are two aspects of the same unified phenomenon can be traced back to Aristotle's ideas on the subject.[8] In his 1740 book A Treatise of Human Nature, David Hume outlines three principles by which ideas may be connected to each other: resemblance, contiguity in time or place, and cause or effect.[9] He argues that the mind uses these principles, rather than reason, to traverse from idea to idea.[6] He writes, "When the mind, therefore, passes from the idea or impression of one object to the idea or belief of another, it is not determined by reason, but by certain principles, which associate together the ideas of these objects, and unite them in the imagination."[9] These connections are formed in the mind by observation and experience. Hume does not believe that any of these associations are "necessary" in the sense that the ideas or objects are truly connected; instead, he sees them as mental tools used for creating a useful mental representation of the world.[6] Later members of the school developed very specific principles elaborating how associations worked, and even a physiological mechanism bearing no resemblance to modern neurophysiology.[10] For a fuller explanation of the intellectual history of associationism and the "Associationist School", see Association of Ideas.
Associationism is often concerned with middle-level to higher-level mental processes such as learning.[8] For instance, the thesis, antithesis, and synthesis are linked in one's mind through repetition so that they become inextricably associated with one another.[8] Among the earliest experiments that tested the applications of associationism was Hermann Ebbinghaus' work. He was considered the first experimenter to apply the associationist principles systematically, and used himself as a subject to study and quantify the relationship between rehearsal and recollection of material.[8] Some of the ideas of the Associationist School also anticipated the principles of conditioning and its use in behavioral psychology.[5] Both classical conditioning and operant conditioning use positive and negative associations as means of conditioning.[10]
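Ebbinghaus-style retention data are often summarized by an exponential forgetting curve. The sketch below illustrates that later formalization, not Ebbinghaus' own equations: both the functional form R = exp(-t/S) and the assumption that rehearsal scales the memory strength S are modeling conveniences chosen for illustration.

```python
# Illustrative forgetting curve: fraction recalled decays exponentially with
# time since learning, while rehearsal is assumed to raise memory strength.
import math

def retention(t_hours: float, strength: float) -> float:
    """Fraction of material recalled after t_hours, given memory strength."""
    return math.exp(-t_hours / strength)

for rehearsals in (1, 3, 5):
    s = 2.0 * rehearsals                  # assumed: strength scales with rehearsal count
    recalled = [retention(t, s) for t in (1, 24, 72)]
    print(f"{rehearsals} rehearsal(s): " + ", ".join(f"{r:.2f}" for r in recalled))
```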
https://en.wikipedia.org/wiki/Associationism
Behaviorism is a systematic approach to understanding the behavior of humans and other animals.[1][2] It assumes that behavior is either a reflex elicited by the pairing of certain antecedent stimuli in the environment, or a consequence of that individual's history, including especially reinforcement and punishment contingencies, together with the individual's current motivational state and controlling stimuli. Although behaviorists generally accept the important role of heredity in determining behavior, deriving from Skinner's two levels of selection (phylogeny and ontogeny),[3] they focus primarily on environmental events. The cognitive revolution of the late 20th century largely replaced behaviorism as an explanatory theory with cognitive psychology, which, unlike behaviorism, views internal mental states as explanations for observable behavior. Behaviorism emerged in the early 1900s as a reaction to depth psychology and other traditional forms of psychology, which often had difficulty making predictions that could be tested experimentally. It was derived from earlier research in the late nineteenth century, such as when Edward Thorndike pioneered the law of effect, a procedure that involved the use of consequences to strengthen or weaken behavior. With a 1924 publication, John B. Watson devised methodological behaviorism, which rejected introspective methods and sought to understand behavior by measuring only observable behaviors and events. It was not until 1945 that B. F. Skinner proposed that covert behavior—including cognition and emotions—is subject to the same controlling variables as observable behavior,[4] which became the basis for his philosophy called radical behaviorism.[5][6] While Watson and Ivan Pavlov investigated how (conditioned) neutral stimuli elicit reflexes in respondent conditioning, Skinner assessed the reinforcement histories of the discriminative (antecedent) stimuli in whose presence behavior is emitted; the process became known as operant conditioning. The application of radical behaviorism—known as applied behavior analysis—is used in a variety of contexts, ranging from applied animal behavior and organizational behavior management to the treatment of mental disorders, such as autism and substance abuse.[7][8] In addition, while behaviorism and cognitive schools of psychological thought do not agree theoretically, they have complemented each other in the cognitive-behavioral therapies, which have demonstrated utility in treating certain pathologies, including simple phobias, PTSD, and mood disorders. The titles given to the various branches of behaviorism include: Two subtypes of theoretical behaviorism are: B. F. Skinner proposed radical behaviorism as the conceptual underpinning of the experimental analysis of behavior. This viewpoint differs from other approaches to behavioral research in various ways but, most notably here, it contrasts with methodological behaviorism in accepting feelings, states of mind and introspection as behaviors also subject to scientific investigation. Like methodological behaviorism, it rejects the reflex as a model of all behavior, and it defends the science of behavior as complementary to but independent of physiology. Radical behaviorism overlaps considerably with other western philosophical positions, such as American pragmatism.[15]
Although John B. Watson mainly emphasized his position of methodological behaviorism throughout his career, Watson and Rosalie Rayner conducted the infamous Little Albert experiment (1920), a study in which Ivan Pavlov's theory of respondent conditioning was first applied to eliciting a fearful reflex of crying in a human infant, and this became the launching point for understanding covert behavior (or private events) in radical behaviorism;[16] however, Skinner felt that aversive stimuli should only be experimented on with animals and spoke out against Watson for testing something so controversial on a human.[citation needed] In 1959, Skinner observed the emotions of two pigeons by noting that they appeared angry because their feathers were ruffled. The pigeons had been placed together in an operant chamber, where they were aggressive as a consequence of previous reinforcement in the environment. Through stimulus control and subsequent discrimination training, whenever Skinner turned off the green light, the pigeons came to notice that the food reinforcer was discontinued following each peck and responded without aggression. Skinner concluded that humans also learn aggression and possess such emotions (as well as other private events) no differently than do nonhuman animals.[citation needed] As experimental behavioural psychology is related to behavioral neuroscience, the first research in the area can be dated to the beginning of the 19th century.[17] Later, this essentially philosophical position gained strength from the success of Skinner's early experimental work with rats and pigeons, summarized in his books The Behavior of Organisms and Schedules of Reinforcement.[18][19] Of particular importance was his concept of the operant response, of which the canonical example was the rat's lever-press. In contrast with the idea of a physiological or reflex response, an operant is a class of structurally distinct but functionally equivalent responses. For example, while a rat might press a lever with its left paw or its right paw or its tail, all of these responses operate on the world in the same way and have a common consequence. Operants are often thought of as species of responses, where the individuals differ but the class coheres in its function: shared consequences play the role for operants that reproductive success plays for species. This is a clear distinction between Skinner's theory and S–R theory. Skinner's empirical work expanded on earlier research on trial-and-error learning by researchers such as Thorndike and Guthrie, with both conceptual reformulations (Thorndike's notion of a stimulus-response "association" or "connection" was abandoned) and methodological ones (the use of the "free operant", so called because the animal was now permitted to respond at its own rate rather than in a series of trials determined by the experimenter). With this method, Skinner carried out substantial experimental work on the effects of different schedules and rates of reinforcement on the rates of operant responses made by rats and pigeons. He achieved remarkable success in training animals to perform unexpected responses, to emit large numbers of responses, and to demonstrate many empirical regularities at the purely behavioral level. This lent some credibility to his conceptual analysis.
It is largely his conceptual analysis that made his work much more rigorous than that of his peers, a point which can be seen clearly in his seminal work Are Theories of Learning Necessary?, in which he criticizes what he viewed to be theoretical weaknesses then common in the study of psychology. An important descendant of the experimental analysis of behavior is the Society for Quantitative Analysis of Behavior.[20][21] As Skinner turned from experimental work to concentrate on the philosophical underpinnings of a science of behavior, his attention turned to human language with his 1957 book Verbal Behavior[22] and other language-related publications;[23] Verbal Behavior laid out a vocabulary and theory for the functional analysis of verbal behavior, and was strongly criticized in a review by Noam Chomsky.[24][25] Skinner did not respond in detail but claimed that Chomsky failed to understand his ideas,[26] and the disagreements between the two and the theories involved have been further discussed.[27][28][29][30][31][32] Innateness theory, which has been heavily critiqued,[33][34] is opposed to behaviorist theory, which claims that language is a set of habits that can be acquired by means of conditioning.[35][36][37] According to some, the behaviorist account is a process which would be too slow to explain a phenomenon as complicated as language learning. What was important for a behaviorist's analysis of human behavior was not language acquisition so much as the interaction between language and overt behavior. In an essay republished in his 1969 book Contingencies of Reinforcement,[23] Skinner took the view that humans could construct linguistic stimuli that would then acquire control over their behavior in the same way that external stimuli could. The possibility of such "instructional control" over behavior meant that contingencies of reinforcement would not always produce the same effects on human behavior as they reliably do in other animals. The focus of a radical behaviorist analysis of human behavior therefore shifted to an attempt to understand the interaction between instructional control and contingency control, and also to understand the behavioral processes that determine what instructions are constructed and what control they acquire over behavior. Recently, a new line of behavioral research on language was started under the name of relational frame theory.[38][39][40][41] B. F. Skinner's book Verbal Behavior (1957) is not so much an account of language development as an attempt to understand human behavior. Additionally, his work serves in understanding social interactions in the child's early developmental stages, focusing on the topic of caregiver-infant interaction.[42] The terminology and theories of Skinner's functional analysis of verbal behavior are commonly used in the study of language development, but were primarily designed to describe behaviors of interest and explain the causes of those behaviors.[42] Noam Chomsky, an American linguistics professor, has criticized and questioned Skinner's theories, including Skinner's suggestion of parental tutoring in language development, for which there is a lack of supporting evidence.[42] Understanding language is a complex topic, but it can be approached through two theories, innateness and acquisition, which offer different perspectives on whether language is inherently "acquired" or "learned".[43]
Operant conditioning was developed by B.F. Skinner in 1938 and is a form of learning in which the frequency of a behavior is controlled by consequences to change behavior.[44][18][45][46] In other words, behavior is controlled by historical consequential contingencies, particularly reinforcement—a stimulus that increases the probability of performing behaviors, and punishment—a stimulus that decreases such probability.[44] The core tools of consequences are either positive (presenting stimuli following a response) or negative (withdrawing stimuli following a response).[47] The following descriptions explain the concepts of four common types of consequences in operant conditioning:[48] A classical experiment in operant conditioning, for example, is the Skinner Box, "puzzle box" or operant conditioning chamber used to test the effects of operant conditioning principles on rats, cats and other species. From this experiment, Skinner discovered that the rats learned very effectively if they were rewarded frequently with food. Skinner also found that he could shape the rats' behavior (create new behavior) through the use of rewards, which could, in turn, be applied to human learning as well. Skinner's model was based on the premise that reinforcement is used for the desired actions or responses, while punishment is used to stop the undesired responses. This theory showed that humans or animals will repeat any action that leads to a positive outcome and avoid any action that leads to a negative outcome. The experiment with the pigeons showed that a positive outcome leads to learned behavior, since the pigeon learned to peck the disc in return for the reward of food. These historical consequential contingencies subsequently lead to (antecedent) stimulus control, but in contrast to respondent conditioning, where antecedent stimuli elicit reflexive behavior, operant behavior is only emitted and therefore does not force its occurrence. It includes the following controlling stimuli:[48] Although operant conditioning plays the largest role in discussions of behavioral mechanisms, respondent conditioning (also called Pavlovian or classical conditioning) is also an important behavior-analytic process that need not refer to mental or other internal processes. Pavlov's experiments with dogs provide the most familiar example of the classical conditioning procedure. In the beginning, the dog was provided meat (the unconditioned stimulus, UCS, which naturally elicits an uncontrolled response) to eat, resulting in increased salivation (the unconditioned response, UCR, a response naturally caused by the UCS). Afterward, a bell ring was presented together with the food. Although the bell ring was initially a neutral stimulus (NS, meaning that the stimulus had no effect by itself), after a number of pairings the dog would start to salivate upon hearing the bell ring alone. Eventually, the neutral stimulus became conditioned, and salivation was elicited as a conditioned response (the same response as the unconditioned response) to the bell ring, now the conditioned stimulus.[51] Although Pavlov proposed some tentative physiological processes that might be involved in classical conditioning, these have not been confirmed.[52] The idea of classical conditioning helped behaviorist John Watson discover the key mechanism behind how humans acquire the behaviors that they do, which was to find a natural reflex that produces the response being considered.
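The operant contingencies just described can be caricatured in a few lines of code. In this toy simulation, reinforcement raises the probability of emitting a response, punishment lowers it, and extinction lets it decay; the update rule and step sizes are illustrative assumptions, not a model from the conditioning literature:

```python
# Toy simulation of operant contingencies: the probability of emitting a
# response is nudged by its consequences. Update rule and step sizes are
# illustrative assumptions only.
import random

def run_session(consequence: str, trials: int = 300, step: float = 0.03, seed: int = 1) -> float:
    """Return the response probability after repeated exposure to one contingency."""
    rng = random.Random(seed)
    p = 0.5                                   # initial tendency to respond
    for _ in range(trials):
        emitted = rng.random() < p            # does the organism respond this trial?
        if emitted and consequence == "reinforcement":
            p = min(1.0, p + step)            # consequence strengthens responding
        elif emitted and consequence == "punishment":
            p = max(0.0, p - step)            # consequence weakens responding
        elif emitted and consequence == "extinction":
            p = max(0.0, p - step / 10)       # unreinforced responding slowly wanes
    return p

for c in ("reinforcement", "punishment", "extinction"):
    print(f"{c}: final response probability {run_session(c):.2f}")
```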
Watson's "Behaviourist Manifesto" has three aspects that deserve special recognition: one is that psychology should be purely objective, with any interpretation of conscious experience being removed, thus leading to psychology as the "science of behaviour"; the second one is that the goals of psychology should be to predict and control behaviour (as opposed to describe and explain conscious mental states); the third one is that there is no notable distinction between human and non-human behaviour. Following Darwin's theory of evolution, this would simply mean that human behaviour is just a more complex version in respect to behaviour displayed by other species.[53] Behaviorism is a psychological movement that can be contrasted withphilosophy of mind.[54][55][56]The basic premise of behaviorism is that the study of behavior should be anatural science, such aschemistryorphysics.[57][58]Initially behaviorism rejected any reference to hypothetical inner states of organisms as causes for their behavior, but B.F. Skinner's radical behaviorism reintroduced reference to inner states and also advocated for the study of thoughts and feelings as behaviors subject to the same mechanisms as external behavior.[57][58]Behaviorism takes a functional view of behavior. According toEdmund Fantinoand colleagues: "Behavior analysis has much to offer the study of phenomena normally dominated by cognitive and social psychologists. We hope that successful application of behavioral theory and methodology will not only shed light on central problems in judgment and choice but will also generate greater appreciation of the behavioral approach."[59] Behaviorist sentiments are not uncommon withinphilosophy of languageandanalytic philosophy. It is sometimes argued thatLudwig Wittgensteindefended alogical behavioristposition[10](e.g., thebeetle in a boxargument). Inlogical positivism(as held, e.g., byRudolf Carnap[10]andCarl Hempel),[10]the meaning of psychological statements are their verification conditions, which consist of performed overt behavior.W. V. O. Quinemade use of a type of behaviorism,[10]influenced by some of Skinner's ideas, in his own work on language. Quine's work in semantics differed substantially from the empiricist semantics of Carnap which he attempted to create an alternative to, couching his semantic theory in references to physical objects rather than sensations.Gilbert Ryledefended a distinct strain of philosophical behaviorism, sketched in his bookThe Concept of Mind.[10]Ryle's central claim was that instances of dualism frequently represented "category mistakes", and hence that they were really misunderstandings of the use of ordinary language.Daniel Dennettlikewise acknowledges himself to be a type of behaviorist,[60]though he offers extensive criticism of radical behaviorism and refutes Skinner's rejection of the value of intentional idioms and the possibility of free will.[61] This is Dennett's main point in "Skinner Skinned". Dennett argues that there is a crucial difference between explaining and explaining away... If our explanation of apparently rational behavior turns out to be extremely simple, we may want to say that the behavior was not really rational after all. But if the explanation is very complex and intricate, we may want to say not that the behavior is not rational, but that we now have a better understanding of what rationality consists in. 
(Compare: if we find out how a computer program solves problems in linear algebra, we don't say it's not really solving them, we just say we know how it does it. On the other hand, in cases like Weizenbaum's ELIZA program, the explanation of how the computer carries on a conversation is so simple that the right thing to say seems to be that the machine isn't really carrying on a conversation, it's just a trick.) Skinner's view of behavior is most often characterized as a "molecular" view of behavior; that is, behavior can be decomposed into atomistic parts or molecules. This view is inconsistent with Skinner's complete description of behavior as delineated in other works, including his 1981 article "Selection by Consequences".[65] Skinner proposed that a complete account of behavior requires understanding of selection history at three levels: biology (the natural selection or phylogeny of the animal); behavior (the reinforcement history or ontogeny of the behavioral repertoire of the animal); and for some species, culture (the cultural practices of the social group to which the animal belongs). This whole organism then interacts with its environment. Molecular behaviorists use notions from melioration theory, negative power function discounting or additive versions of negative power function discounting.[66] According to Moore,[67] the perseverance in a molecular examination of behavior may be a sign of a desire for an in-depth understanding, perhaps to identify underlying mechanisms or components that contribute to complex actions. This strategy might involve elements, procedures, or variables that contribute to behaviorism. Molar behaviorists, such as Howard Rachlin, Richard Herrnstein, and William Baum, argue that behavior cannot be understood by focusing on events in the moment. That is, they argue that behavior is best understood as the ultimate product of an organism's history and that molecular behaviorists are committing a fallacy by inventing fictitious proximal causes for behavior. Molar behaviorists argue that standard molecular constructs, such as "associative strength", are better replaced by molar variables such as rate of reinforcement.[68] Thus, a molar behaviorist would describe "loving someone" as a pattern of loving behavior over time; there is no isolated, proximal cause of loving behavior, only a history of behaviors (of which the current behavior might be an example) that can be summarized as "love".
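The molar variable "rate of reinforcement" mentioned above has a classic quantitative expression in Herrnstein's matching law, under which relative response rates on concurrent alternatives tend to match relative reinforcement rates. A minimal sketch; the bias and sensitivity parameters of the generalized matching law, needed to fit real data, are omitted here:

```python
# Herrnstein's matching law: relative response rates on two concurrent
# alternatives match relative reinforcement rates, B1/(B1+B2) = R1/(R1+R2).
def predicted_allocation(r1: float, r2: float) -> tuple[float, float]:
    """Predicted share of behavior on each alternative, given reinforcement rates."""
    total = r1 + r2
    return r1 / total, r2 / total

b1, b2 = predicted_allocation(40.0, 10.0)     # e.g. 40 vs 10 reinforcers per hour
print(f"predicted allocation: {b1:.0%} on alternative 1, {b2:.0%} on alternative 2")
```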
Skinner's radical behaviorism has been highly successful experimentally, revealing new phenomena with new methods, but Skinner's dismissal of theory limited its development. Theoretical behaviorism[12] recognized that a historical system, an organism, has a state as well as sensitivity to stimuli and the ability to emit responses. Indeed, Skinner himself acknowledged the possibility of what he called "latent" responses in humans, even though he neglected to extend this idea to rats and pigeons.[69] Latent responses constitute a repertoire, from which operant reinforcement can select. Theoretical behaviorism links the brain and behavior in a way that provides a real understanding of behavior, rather than a mental presumption of how the brain and behavior relate.[70] The theoretical concepts of behaviorism are blended with knowledge of mental structures, such as memory and expectancies, whose examination the inflexible behaviorist stances of the past had traditionally forbidden.[71] Because of its flexibility, theoretical behaviorism permits cognitive processes to have an impact on behavior. From its inception, behavior analysis has centered its examination on cultural occurrences (Skinner, 1953,[72] 1961,[73] 1971,[74] 1974[75]). Nevertheless, the methods used to tackle these occurrences have evolved. Initially, culture was perceived as a factor influencing behavior, later becoming a subject of study in itself.[76] This shift prompted research into group practices and the potential for significant behavioral transformations on a larger scale. Following Glenn's (1986) influential work, "Metacontingencies in Walden Two",[77] numerous research endeavors exploring behavior analysis in cultural contexts have centered around the concept of the metacontingency. Glenn (2003) posited that understanding the origins and development of cultures necessitates delving beyond the evolutionary and behavioral principles governing species characteristics and individually learned behaviors, and requires analysis at a broader level.[78] With the fast growth of big behavioral data and applications, behavior analysis is ubiquitous. Understanding behavior from the informatics and computing perspective becomes increasingly critical for an in-depth understanding of what, why and how behaviors are formed, interact, evolve, change and affect business and decisions. Behavior informatics and behavior computing deeply explore behavior intelligence and behavior insights from the informatics and computing perspectives. Pavel et al. (2015) found that in the realm of healthcare and health psychology, substantial evidence supports the notion that personalized health interventions yield greater effectiveness compared to standardized approaches. Additionally, researchers found that recent progress in sensor and communication technology, coupled with data analysis and computational modeling, holds significant potential for revolutionizing interventions aimed at changing health behavior. Simultaneous advancements in sensor and communication technology, alongside the field of data science, have now made it possible to comprehensively measure behaviors occurring in real-life settings. These two elements, when combined with advancements in computational modeling, have laid the groundwork for the emerging discipline known as behavioral informatics: a scientific and engineering domain encompassing behavior tracking, evaluation, computational modeling, deduction, and intervention.[79] In the second half of the 20th century, behaviorism was largely eclipsed as a result of the cognitive revolution.[80][81] This shift was due to radical behaviorism being highly criticized for not examining mental processes, and it led to the development of the cognitive therapy movement. In the mid-20th century, three main influences arose that would inspire and shape cognitive psychology as a formal school of thought: In more recent years, several scholars have expressed reservations about the pragmatic tendencies of behaviorism.
In the early years of cognitive psychology, behaviorist critics held that the empiricism it pursued was incompatible with the concept of internal mental states. Cognitive neuroscience, however, continues to gather evidence of direct correlations between physiological brain activity and putative mental states, endorsing the basis for cognitive psychology. Staddon (1993) found that Skinner's theory presents two significant deficiencies. Firstly, Skinner downplayed the significance of the processes responsible for generating novel behaviors, which Staddon termed "behavioral variation"; Skinner primarily emphasized reinforcement as the sole determinant for selecting responses, overlooking these critical processes involved in creating new behaviors. Secondly, both Skinner and many other behaviorists of that era endorsed contiguity as a sufficient process for response selection. However, Rescorla and Wagner (1972) later demonstrated, particularly in classical conditioning, that competition is an essential complement to contiguity. They showed that in operant conditioning, both contiguity and competition are imperative for discerning cause-and-effect relationships.[88] The influential Rescorla-Wagner model highlights the significance of competition for limited "associative value," essential for assessing predictability. A similar formal argument was presented by Ying Zhang and John Staddon (1991, in press) concerning operant conditioning: the combination of contiguity and competition among action tendencies suffices as an assignment-of-credit mechanism capable of detecting genuine instrumental contingency between a response and its reinforcer.[89] This mechanism delineates the limitations of Skinner's idea of adventitious reinforcement, revealing its efficacy only under stringent conditions: when the reinforcement's strengthening effect is nearly constant across instances and with very short intervals between reinforcers. However, these conditions rarely hold in reality: behavior following reinforcement tends to exhibit high variability, and superstitious behavior diminishes with extremely brief intervals between reinforcements.[88]
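The Rescorla-Wagner model discussed above can be stated compactly: on each trial, every cue present changes its associative value by ΔV = αβ(λ − ΣV), so cues compete for a limited total. The sketch below, with arbitrary illustrative parameter values, reproduces the blocking effect that motivates the competition idea:

```python
# Rescorla-Wagner update: each cue present on a trial changes by
# dV = alpha * beta * (lam - V_total), so cues share one prediction error
# and compete for associative value.
def rescorla_wagner(trials, V, alpha=0.3, beta=1.0):
    """trials: list of (cues_present, lam) pairs; V: dict mapping cue -> value."""
    for cues, lam in trials:
        v_total = sum(V[c] for c in cues)      # prediction from all cues present
        error = lam - v_total                  # shared prediction error
        for c in cues:
            V[c] += alpha * beta * error
    return V

# Blocking: pretrain A alone, then train A+B together. B gains little value
# because A already predicts the outcome, illustrating competition for
# limited "associative value".
V = {"A": 0.0, "B": 0.0}
rescorla_wagner([({"A"}, 1.0)] * 50, V)        # phase 1: A -> outcome
rescorla_wagner([({"A", "B"}, 1.0)] * 50, V)   # phase 2: AB -> outcome
print({k: round(v, 3) for k, v in V.items()})  # V["B"] stays near 0 (blocked)
```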
Behavior therapy is a term referring to different types of therapies that treat mental health disorders. It identifies and helps change people's unhealthy or destructive behaviors through learning theory and conditioning. Ivan Pavlov's classical conditioning, as well as counterconditioning, are the basis for much of clinical behavior therapy, which also includes other techniques, such as operant conditioning (or contingency management) and modeling (sometimes called observational learning). A frequently noted behavior therapy is systematic desensitization (graduated exposure therapy), which was first demonstrated by Joseph Wolpe and Arnold Lazarus.[90] Applied behavior analysis (ABA)—also called behavioral engineering—is a scientific discipline that applies the principles of behavior analysis to change behavior. ABA derived from much earlier research in the Journal of the Experimental Analysis of Behavior, which was founded by B.F. Skinner and his colleagues at Harvard University. Nearly a decade after the study "The psychiatric nurse as a behavioral engineer" (1959) was published in that journal, demonstrating how effective the token economy was in reinforcing more adaptive behavior for hospitalized patients with schizophrenia and intellectual disability, researchers at the University of Kansas started the Journal of Applied Behavior Analysis in 1968. Although ABA and behavior modification are similar behavior-change technologies in that the learning environment is modified through respondent and operant conditioning, behavior modification did not initially address the causes of the behavior (particularly, the environmental stimuli that occurred in the past), or investigate solutions that would otherwise prevent the behavior from reoccurring. As the evolution of ABA began to unfold in the mid-1980s, functional behavior assessments (FBAs) were developed to clarify the function of a behavior, so that it can accurately be determined which differential reinforcement contingencies will be most effective, and so that aversive punishments are less likely to be administered.[16][91][92] In addition, methodological behaviorism was the theory underpinning behavior modification, since private events were not conceptualized during the 1970s and early 1980s, which contrasted with the radical behaviorism of behavior analysis. ABA—the term that replaced behavior modification—has emerged into a thriving field.[16][93] The independent development of behaviour analysis outside the United States also continues.[94][95][96][97][98][99] In the US, the American Psychological Association (APA) features a subdivision for Behavior Analysis, titled APA Division 25: Behavior Analysis, which has been in existence since 1964, and the interests among behavior analysts today are wide-ranging, as indicated in a review of the 30 Special Interest Groups (SIGs) within the Association for Behavior Analysis International (ABAI). Such interests include everything from animal behavior and environmental conservation to classroom instruction (such as direct instruction and precision teaching), verbal behavior, developmental disabilities and autism, clinical psychology (i.e., forensic behavior analysis), behavioral medicine (i.e., behavioral gerontology, AIDS prevention, and fitness training), and consumer behavior analysis. The field of applied animal behavior—a sub-discipline of ABA that involves training animals—is regulated by the Animal Behavior Society, and those who practice this technique are called applied animal behaviorists. Research on applied animal behavior has been frequently conducted in the Applied Animal Behaviour Science journal since its founding in 1974. ABA has also been particularly well-established in the area of developmental disabilities since the 1960s, but it was not until the late 1980s, when the number of individuals diagnosed with autism spectrum disorders began to grow rapidly and groundbreaking research was being published, that parent advocacy groups started demanding services throughout the 1990s, which encouraged the formation of the Behavior Analyst Certification Board, a credentialing program that certifies professionally trained behavior analysts on the national level to deliver such services. Nevertheless, the certification is applicable to all human services related to the rather broad field of behavior analysis (other than the treatment for autism), and the ABAI currently has 14 accredited MA and Ph.D. programs for comprehensive study in that field. Early behavioral interventions (EBIs) based on ABA are empirically validated for teaching children with autism and have been proven as such for over five decades. Since the late 1990s and throughout the twenty-first century, early ABA interventions have also been identified as the treatment of choice by the US Surgeon General, American Academy of Pediatrics, and US National Research Council.
Discrete trial training—also called early intensive behavioral intervention—is the traditional EBI technique, implemented for thirty to forty hours per week, that instructs a child to sit in a chair, imitate fine and gross motor behaviors, and learn eye contact and speech, which are taught through shaping, modeling, and prompting, with such prompting being phased out as the child begins mastering each skill. When the child becomes more verbal from discrete trials, the table-based instructions are later discontinued, and another EBI procedure known as incidental teaching is introduced in the natural environment by having the child ask for desired items kept out of their direct access, as well as allowing the child to choose the play activities that will motivate them to engage with their facilitators before teaching the child how to interact with other children their own age. A related term for incidental teaching, called pivotal response treatment (PRT), refers to EBI procedures that exclusively entail twenty-five hours per week of naturalistic teaching (without initially using discrete trials). Current research is showing that there is a wide array of learning styles and that it is the children with receptive language delays who initially require discrete trials to acquire speech. Organizational behavior management, which applies contingency management procedures to model and reinforce appropriate work behavior for employees in organizations, has developed a particularly strong following within ABA, as evidenced by the formation of the OBM Network and the Journal of Organizational Behavior Management, which was rated the third-highest impact journal in applied psychology by the ISI JOBM rating. Modern-day clinical behavior analysis has also witnessed a massive resurgence in research, with the development of relational frame theory (RFT), which is described as an extension of verbal behavior and a "post-Skinnerian account of language and cognition."[100][38][39][40] RFT also forms the empirical basis for acceptance and commitment therapy, a therapeutic approach to counseling often used to manage such conditions as anxiety and obesity that consists of acceptance and commitment, value-based living, cognitive defusion, counterconditioning (mindfulness), and contingency management (positive reinforcement).[101][102][103][104][105][106] Another evidence-based counseling technique derived from RFT is the functional analytic psychotherapy known as behavioral activation, which relies on the ACL model—awareness, courage, and love—to reinforce more positive moods for those struggling with depression. Incentive-based contingency management (CM) is the standard of care for adults with substance-use disorders; it has also been shown to be highly effective for other addictions (i.e., obesity and gambling). Although it does not directly address the underlying causes of behavior, incentive-based CM is highly behavior analytic, as it targets the function of the client's motivational behavior by relying on a preference assessment, which is an assessment procedure that allows the individual to select the preferred reinforcer (in this case, the monetary value of the voucher, or the use of other incentives, such as prizes).
Another evidence-based CM intervention for substance abuse is community reinforcement approach and family training, which uses FBAs and counterconditioning techniques—such as behavioral skills training and relapse prevention—to model and reinforce healthier lifestyle choices which promote self-management of abstinence from drugs, alcohol, or cigarette smoking during high-risk exposure when engaging with family members, friends, and co-workers. While schoolwide positive behavior support consists of conducting assessments and a task analysis plan to differentially reinforce curricular supports that replace students' disruptive behavior in the classroom, pediatric feeding therapy incorporates a liquid chaser and chin feeder to shape proper eating behavior for children with feeding disorders. Habit reversal training, an approach firmly grounded in counterconditioning, which uses contingency management procedures to reinforce alternative behavior, is currently the only empirically validated approach for managing tic disorders. Some studies on exposure (desensitization) therapies—which refer to an array of interventions based on the respondent conditioning procedure known as habituation, typically infused with counterconditioning procedures such as meditation and breathing exercises—have been published in behavior analytic journals since the 1990s, as most other research is conducted from a cognitive-behavior therapy framework. When based on a behavior analytic research standpoint, FBAs are implemented to precisely outline how to employ the flooding form of desensitization (also called direct exposure therapy) for those who are unsuccessful in overcoming their specific phobia through systematic desensitization (also known as graduated exposure therapy). These studies also reveal that systematic desensitization is more effective for children if used in conjunction with shaping, which is further termed contact desensitization, but this comparison has yet to be substantiated with adults. Other widely published behavior analytic journals include Behavior Modification, The Behavior Analyst, Journal of Positive Behavior Interventions, Journal of Contextual Behavioral Science, The Analysis of Verbal Behavior, Behavior and Philosophy, Behavior and Social Issues, and The Psychological Record. Cognitive-behavior therapy (CBT) is a behavior therapy discipline that often overlaps considerably with the clinical behavior analysis subfield of ABA, but differs in that it initially incorporates cognitive restructuring and emotional regulation to alter a person's cognition and emotions. Various forms of CBT have been used to treat physically experienced symptoms that disrupt individuals' livelihoods, which often stem from complex mental health disorders. Complications of many trauma-induced disorders result in lack of sleep and nightmares, with cognitive behavior therapy functioning as an intervention found to reduce the number of PTSD patients suffering from related sleep disturbance.[107] A popularly noted counseling intervention known as dialectical behavior therapy (DBT) includes the use of a chain analysis, as well as cognitive restructuring, emotional regulation, distress tolerance, counterconditioning (mindfulness), and contingency management (positive reinforcement). DBT is quite similar to acceptance and commitment therapy, but contrasts in that it derives from a CBT framework.
Although DBT is most widely researched for and empirically validated to reduce the risk of suicide in psychiatric patients with borderline personality disorder, it can often be applied effectively to other mental health conditions, such as substance abuse, as well as mood and eating disorders. A study on BPD confirmed DBT as a constructive therapeutic option for emotionally dysregulated patients. Before DBT, participants with borderline personality disorder were shown images of highly emotional people and neural activity in the amygdala was recorded via fMRI; after one year of consistent dialectical behavior therapy, participants were re-tested, with fMRI capturing a decrease in amygdala hyperactivity (emotional activation) in response to the applied stimulus, exhibiting increases in emotional regulation capabilities.[108] Most research on exposure therapies (also called desensitization)—ranging from eye movement desensitization and reprocessing therapy to exposure and response prevention—is conducted through a CBT framework in non-behavior analytic journals, and these enhanced exposure therapies are well established in the research literature for treating phobic, post-traumatic stress, and other anxiety disorders (such as obsessive-compulsive disorder, or OCD). Cognitive-based behavioral activation (BA)—the psychotherapeutic approach used for depression—is shown to be highly effective and is widely used in clinical practice. Some large randomized control trials have indicated that cognitive-based BA is as beneficial as antidepressant medications but more efficacious than traditional cognitive therapy. Other commonly used clinical treatments derived from behavioral learning principles that are often implemented through a CBT model include community reinforcement approach and family training, and habit reversal training, for substance abuse and tics, respectively.
https://en.wikipedia.org/wiki/Behaviorism
In mathematical logic, algebraic logic is the reasoning obtained by manipulating equations with free variables. What is now usually called classical algebraic logic focuses on the identification and algebraic description of models appropriate for the study of various logics (in the form of classes of algebras that constitute the algebraic semantics for these deductive systems) and connected problems like representation and duality. Well-known results like the representation theorem for Boolean algebras and Stone duality fall under the umbrella of classical algebraic logic (Czelakowski 2003). Works in the more recent abstract algebraic logic (AAL) focus on the process of algebraization itself, like classifying various forms of algebraizability using the Leibniz operator (Czelakowski 2003). A homogeneous binary relation is found in the power set of X×X for some set X, while a heterogeneous relation is found in the power set of X×Y, where X≠Y. Whether a given relation holds for two individuals is one bit of information, so relations are studied with Boolean arithmetic. Elements of the power set are partially ordered by inclusion, and the lattice of these sets becomes an algebra through relative multiplication or composition of relations. "The basic operations are set-theoretic union, intersection and complementation, the relative multiplication, and conversion."[1] The conversion refers to the converse relation, which always exists, contrary to function theory. A given relation may be represented by a logical matrix; the converse relation is then represented by the transpose matrix. A relation obtained as the composition of two others is represented by the logical matrix obtained by matrix multiplication using Boolean arithmetic. An example of the calculus of relations arises in erotetics, the theory of questions. In the universe of utterances there are statements S and questions Q. There are two relations π and α from Q to S: qαa holds when a is a direct answer to question q. The other relation, qπp, holds when p is a presupposition of question q. The converse relation πT runs from S to Q, so that the composition πTα is a homogeneous relation on S.[2] The art of putting the right question to elicit a sufficient answer is recognized in Socratic method dialogue. The description of the key binary relation properties has been formulated with the calculus of relations. The univalence property of functions describes a relation R that satisfies the formula RTR⊆I,{\displaystyle R^{T}R\subseteq I,} where I is the identity relation on the range of R. The injective property corresponds to univalence of RT{\displaystyle R^{T}}, or the formula RRT⊆I,{\displaystyle RR^{T}\subseteq I,} where this time I is the identity on the domain of R. But a univalent relation is only a partial function, while a univalent total relation is a function. The formula for totality is I⊆RRT.{\displaystyle I\subseteq RR^{T}.} Charles Loewner and Gunther Schmidt use the term mapping for a total, univalent relation.[3][4] The facility of complementary relations inspired Augustus De Morgan and Ernst Schröder to introduce equivalences using R¯{\displaystyle {\bar {R}}} for the complement of relation R. These equivalences provide alternative formulas for univalent relations (RI¯⊆R¯{\displaystyle R{\bar {I}}\subseteq {\bar {R}}}), and total relations (R¯⊆RI¯{\displaystyle {\bar {R}}\subseteq R{\bar {I}}}). Therefore, mappings satisfy the formula R¯=RI¯.{\displaystyle {\bar {R}}=R{\bar {I}}.} Schmidt calls this principle "slipping below negation from the left".[5] For a mapping f, fA¯=fA¯.{\displaystyle f{\bar {A}}={\overline {fA}}.}
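These matrix formulations translate directly into code. Below is a small numpy sketch (the 3-element domain and the particular relation are arbitrary examples) that implements composition as Boolean matrix multiplication and the converse as transposition, and checks univalence, totality, and the mapping identity just stated:

```python
# Relation-algebra operations on logical (Boolean) matrices: composition is
# Boolean matrix multiplication, the converse is the transpose, and the
# univalence/totality/mapping properties reduce to matrix inclusions.
import numpy as np

def compose(R, S):
    """Relative multiplication R;S via a Boolean matrix product."""
    return (R.astype(int) @ S.astype(int)) > 0

def subset(R, S):
    """Inclusion R ⊆ S, checked elementwise."""
    return bool(np.all(~R | S))

I = np.eye(3, dtype=bool)
R = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 1, 0]], dtype=bool)        # each row has exactly one 1: a mapping

print("univalent (R^T R ⊆ I):", subset(compose(R.T, R), I))
print("total     (I ⊆ R R^T):", subset(I, compose(R, R.T)))
# For a mapping, the complement slips below composition from the left: R̄ = R;Ī.
print("mapping identity R̄ = R;Ī:", np.array_equal(~R, compose(R, ~I)))
```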
The relation algebra structure, based in set theory, was transcended by Tarski with axioms describing it. He then asked if every algebra satisfying the axioms could be represented by a set relation. The negative answer[6] opened the frontier of abstract algebraic logic.[7][8][9] Algebraic logic treats algebraic structures, often bounded lattices, as models (interpretations) of certain logics, making logic a branch of order theory. In algebraic logic: In the table below, the left column contains one or more logical or mathematical systems, and the algebraic structures which are their models are shown on the right in the same row. Some of these structures are either Boolean algebras or proper extensions thereof. Modal and other nonclassical logics are typically modeled by what are called "Boolean algebras with operators." Algebraic formalisms going beyond first-order logic in at least some respects include: Algebraic logic is, perhaps, the oldest approach to formal logic, arguably beginning with a number of memoranda Leibniz wrote in the 1680s, some of which were published in the 19th century and translated into English by Clarence Lewis in 1918.[10]: 291–305 But nearly all of Leibniz's known work on algebraic logic was published only in 1903 after Louis Couturat discovered it in Leibniz's Nachlass. Parkinson (1966) and Loemker (1969) translated selections from Couturat's volume into English. Modern mathematical logic began in 1847, with two pamphlets whose respective authors were George Boole[11] and Augustus De Morgan.[12] In 1870 Charles Sanders Peirce published the first of several works on the logic of relatives. Alexander Macfarlane published his Principles of the Algebra of Logic[13] in 1879, and in 1883, Christine Ladd, a student of Peirce at Johns Hopkins University, published "On the Algebra of Logic".[14] Logic turned more algebraic when binary relations were combined with composition of relations. For sets A and B, a relation over A and B is represented as a member of the power set of A×B with properties described by Boolean algebra. The "calculus of relations"[9] is arguably the culmination of Leibniz's approach to logic. At the Hochschule Karlsruhe the calculus of relations was described by Ernst Schröder.[15] In particular he formulated Schröder rules, though De Morgan had anticipated them with his Theorem K. In 1903 Bertrand Russell developed the calculus of relations and logicism as his version of pure mathematics based on the operations of the calculus as primitive notions.[16] The "Boole–Schröder algebra of logic" was developed at the University of California, Berkeley in a textbook by Clarence Lewis in 1918.[10] He treated the logic of relations as derived from the propositional functions of two or more variables. Hugh MacColl, Gottlob Frege, Giuseppe Peano, and A. N. Whitehead all shared Leibniz's dream of combining symbolic logic, mathematics, and philosophy. Some writings by Leopold Löwenheim and Thoralf Skolem on algebraic logic appeared after the 1910–13 publication of Principia Mathematica, and Tarski revived interest in relations with his 1941 essay "On the Calculus of Relations".[9] According to Helena Rasiowa, "The years 1920-40 saw, in particular in the Polish school of logic, researches on non-classical propositional calculi conducted by what is termed the logical matrix method.
Since logical matrices are certain abstract algebras, this led to the use of an algebraic method in logic."[17] Brady (2000) discusses the rich historical connections between algebraic logic and model theory. The founders of model theory, Ernst Schröder and Leopold Loewenheim, were logicians in the algebraic tradition. Alfred Tarski, the founder of set theoretic model theory as a major branch of contemporary mathematical logic, also: In the practice of the calculus of relations, Jacques Riguet used algebraic logic to advance useful concepts: he extended the concept of an equivalence relation (on a set) to the heterogeneous case with the notion of a difunctional relation. Riguet also extended ordering to the heterogeneous context by his note that a staircase logical matrix has a complement that is also a staircase, and that the theorem of N. M. Ferrers follows from interpretation of the transpose of a staircase. Riguet generated rectangular relations by taking the outer product of logical vectors; these contribute to the non-enlargeable rectangles of formal concept analysis. Leibniz had no influence on the rise of algebraic logic because his logical writings were little studied before the Parkinson and Loemker translations. Our present understanding of Leibniz as a logician stems mainly from the work of Wolfgang Lenzen, summarized in Lenzen (2004). To see how present-day work in logic and metaphysics can draw inspiration from, and shed light on, Leibniz's thought, see Zalta (2000).
https://en.wikipedia.org/wiki/Calculus_of_relations